2025-09-10 00:00:07.781366 | Job console starting
2025-09-10 00:00:07.792813 | Updating git repos
2025-09-10 00:00:07.854141 | Cloning repos into workspace
2025-09-10 00:00:08.003730 | Restoring repo states
2025-09-10 00:00:08.023275 | Merging changes
2025-09-10 00:00:08.023293 | Checking out repos
2025-09-10 00:00:08.370529 | Preparing playbooks
2025-09-10 00:00:09.078573 | Running Ansible setup
2025-09-10 00:00:14.298331 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-09-10 00:00:15.643422 |
2025-09-10 00:00:15.643551 | PLAY [Base pre]
2025-09-10 00:00:15.697881 |
2025-09-10 00:00:15.698004 | TASK [Setup log path fact]
2025-09-10 00:00:15.753692 | orchestrator | ok
2025-09-10 00:00:15.812835 |
2025-09-10 00:00:15.812974 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-09-10 00:00:15.858153 | orchestrator | ok
2025-09-10 00:00:15.891202 |
2025-09-10 00:00:15.891323 | TASK [emit-job-header : Print job information]
2025-09-10 00:00:15.970556 | # Job Information
2025-09-10 00:00:15.970706 | Ansible Version: 2.16.14
2025-09-10 00:00:15.970740 | Job: testbed-deploy-in-a-nutshell-with-tempest-ubuntu-24.04
2025-09-10 00:00:15.970772 | Pipeline: periodic-midnight
2025-09-10 00:00:15.970794 | Executor: 521e9411259a
2025-09-10 00:00:15.970815 | Triggered by: https://github.com/osism/testbed
2025-09-10 00:00:15.970851 | Event ID: b11eb8c2e7374a229adff60f6f414ec6
2025-09-10 00:00:15.983499 |
2025-09-10 00:00:15.983605 | LOOP [emit-job-header : Print node information]
2025-09-10 00:00:16.185318 | orchestrator | ok:
2025-09-10 00:00:16.185472 | orchestrator | # Node Information
2025-09-10 00:00:16.185504 | orchestrator | Inventory Hostname: orchestrator
2025-09-10 00:00:16.185528 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-09-10 00:00:16.185550 | orchestrator | Username: zuul-testbed03
2025-09-10 00:00:16.185571 | orchestrator | Distro: Debian 12.12
2025-09-10 00:00:16.185594 | orchestrator | Provider: static-testbed
2025-09-10 00:00:16.185615 | orchestrator | Region:
2025-09-10 00:00:16.185635 | orchestrator | Label: testbed-orchestrator
2025-09-10 00:00:16.185655 | orchestrator | Product Name: OpenStack Nova
2025-09-10 00:00:16.185674 | orchestrator | Interface IP: 81.163.193.140
2025-09-10 00:00:16.207052 |
2025-09-10 00:00:16.207159 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-09-10 00:00:17.478149 | orchestrator -> localhost | changed
2025-09-10 00:00:17.484811 |
2025-09-10 00:00:17.484907 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-09-10 00:00:19.739236 | orchestrator -> localhost | changed
2025-09-10 00:00:19.752544 |
2025-09-10 00:00:19.752644 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-09-10 00:00:20.548941 | orchestrator -> localhost | ok
2025-09-10 00:00:20.554672 |
2025-09-10 00:00:20.554762 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-09-10 00:00:20.593750 | orchestrator | ok
2025-09-10 00:00:20.639643 | orchestrator | included: /var/lib/zuul/builds/dcf21b5b42194a42935d9fb9db71fe30/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-09-10 00:00:20.676719 |
2025-09-10 00:00:20.677026 | TASK [add-build-sshkey : Create Temp SSH key]
2025-09-10 00:00:22.722896 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-09-10 00:00:22.723076 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/dcf21b5b42194a42935d9fb9db71fe30/work/dcf21b5b42194a42935d9fb9db71fe30_id_rsa
2025-09-10 00:00:22.723107 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/dcf21b5b42194a42935d9fb9db71fe30/work/dcf21b5b42194a42935d9fb9db71fe30_id_rsa.pub
2025-09-10 00:00:22.723129 | orchestrator -> localhost | The key fingerprint is:
2025-09-10 00:00:22.723151 | orchestrator -> localhost | SHA256:bM4vlt6NRD2Kc0xP94EyrhENs+0JknLEmCl2dU8CyFg zuul-build-sshkey
2025-09-10 00:00:22.723181 | orchestrator -> localhost | The key's randomart image is:
2025-09-10 00:00:22.723206 | orchestrator -> localhost | +---[RSA 3072]----+
2025-09-10 00:00:22.723225 | orchestrator -> localhost | | +E.o.o . |
2025-09-10 00:00:22.723243 | orchestrator -> localhost | | . o* . + |
2025-09-10 00:00:22.723260 | orchestrator -> localhost | | o = o o . |
2025-09-10 00:00:22.723277 | orchestrator -> localhost | | . o ... *. . |
2025-09-10 00:00:22.723293 | orchestrator -> localhost | | . +S+o=+... |
2025-09-10 00:00:22.723312 | orchestrator -> localhost | | o+.==++o ..|
2025-09-10 00:00:22.723329 | orchestrator -> localhost | | =o=+. .|
2025-09-10 00:00:22.723345 | orchestrator -> localhost | | +*oo |
2025-09-10 00:00:22.723362 | orchestrator -> localhost | | o.o+ . |
2025-09-10 00:00:22.723379 | orchestrator -> localhost | +----[SHA256]-----+
2025-09-10 00:00:22.723419 | orchestrator -> localhost | ok: Runtime: 0:00:01.249019
2025-09-10 00:00:22.729374 |
2025-09-10 00:00:22.729456 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-09-10 00:00:22.766418 | orchestrator | ok
2025-09-10 00:00:22.774477 | orchestrator | included: /var/lib/zuul/builds/dcf21b5b42194a42935d9fb9db71fe30/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-09-10 00:00:22.804703 |
2025-09-10 00:00:22.804801 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-09-10 00:00:22.835089 | orchestrator | skipping: Conditional result was False
2025-09-10 00:00:22.841842 |
2025-09-10 00:00:22.841925 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-09-10 00:00:23.701079 | orchestrator | changed
2025-09-10 00:00:23.716049 |
2025-09-10 00:00:23.716131 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-09-10 00:00:24.049751 | orchestrator | ok
2025-09-10 00:00:24.060243 |
2025-09-10 00:00:24.060329 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-09-10 00:00:24.441861 | orchestrator | ok
2025-09-10 00:00:24.451544 |
2025-09-10 00:00:24.451645 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-09-10 00:00:24.923546 | orchestrator | ok
2025-09-10 00:00:24.932252 |
2025-09-10 00:00:24.932334 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-09-10 00:00:24.976730 | orchestrator | skipping: Conditional result was False
2025-09-10 00:00:24.982313 |
2025-09-10 00:00:24.982403 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-09-10 00:00:26.028147 | orchestrator -> localhost | changed
2025-09-10 00:00:26.041986 |
2025-09-10 00:00:26.042077 | TASK [add-build-sshkey : Add back temp key]
2025-09-10 00:00:26.876392 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/dcf21b5b42194a42935d9fb9db71fe30/work/dcf21b5b42194a42935d9fb9db71fe30_id_rsa (zuul-build-sshkey)
2025-09-10 00:00:26.876566 | orchestrator -> localhost | ok: Runtime: 0:00:00.019847
2025-09-10 00:00:26.882365 |
2025-09-10 00:00:26.882444 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-09-10 00:00:27.400009 | orchestrator | ok
2025-09-10 00:00:27.404873 |
2025-09-10 00:00:27.404955 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-09-10 00:00:27.475310 | orchestrator | skipping: Conditional result was False
2025-09-10 00:00:27.601816 |
2025-09-10 00:00:27.601941 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-09-10 00:00:28.104254 | orchestrator | ok
2025-09-10 00:00:28.127543 |
2025-09-10 00:00:28.127643 | TASK [validate-host : Define zuul_info_dir fact]
2025-09-10 00:00:28.175506 | orchestrator | ok
2025-09-10 00:00:28.181183 |
2025-09-10 00:00:28.181266 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-09-10 00:00:28.874335 | orchestrator -> localhost | ok
2025-09-10 00:00:28.881205 |
2025-09-10 00:00:28.881287 | TASK [validate-host : Collect information about the host]
2025-09-10 00:00:30.525068 | orchestrator | ok
2025-09-10 00:00:30.538431 |
2025-09-10 00:00:30.538526 | TASK [validate-host : Sanitize hostname]
2025-09-10 00:00:30.626355 | orchestrator | ok
2025-09-10 00:00:30.631392 |
2025-09-10 00:00:30.631903 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-09-10 00:00:32.241776 | orchestrator -> localhost | changed
2025-09-10 00:00:32.246825 |
2025-09-10 00:00:32.246925 | TASK [validate-host : Collect information about zuul worker]
2025-09-10 00:00:32.856911 | orchestrator | ok
2025-09-10 00:00:32.861038 |
2025-09-10 00:00:32.861113 | TASK [validate-host : Write out all zuul information for each host]
2025-09-10 00:00:33.819540 | orchestrator -> localhost | changed
2025-09-10 00:00:33.831338 |
2025-09-10 00:00:33.831433 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-09-10 00:00:34.154275 | orchestrator | ok
2025-09-10 00:00:34.159161 |
2025-09-10 00:00:34.159240 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-09-10 00:01:28.138359 | orchestrator | changed:
2025-09-10 00:01:28.138584 | orchestrator | .d..t...... src/
2025-09-10 00:01:28.138620 | orchestrator | .d..t...... src/github.com/
2025-09-10 00:01:28.138644 | orchestrator | .d..t...... src/github.com/osism/
2025-09-10 00:01:28.138666 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-09-10 00:01:28.138688 | orchestrator | RedHat.yml
2025-09-10 00:01:28.151760 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-09-10 00:01:28.151778 | orchestrator | RedHat.yml
2025-09-10 00:01:28.151830 | orchestrator | = 1.53.0"...
2025-09-10 00:01:47.102133 | orchestrator | 00:01:47.101 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-09-10 00:01:47.978669 | orchestrator | 00:01:47.978 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-09-10 00:01:48.601477 | orchestrator | 00:01:48.601 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-09-10 00:01:48.672622 | orchestrator | 00:01:48.672 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.2...
2025-09-10 00:01:49.422864 | orchestrator | 00:01:49.422 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.2 (signed, key ID 4F80527A391BEFD2)
2025-09-10 00:01:49.489669 | orchestrator | 00:01:49.489 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-09-10 00:01:49.940211 | orchestrator | 00:01:49.939 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-09-10 00:01:49.940265 | orchestrator | 00:01:49.940 STDOUT terraform: Providers are signed by their developers.
2025-09-10 00:01:49.940282 | orchestrator | 00:01:49.940 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-09-10 00:01:49.940289 | orchestrator | 00:01:49.940 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-09-10 00:01:49.940295 | orchestrator | 00:01:49.940 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-09-10 00:01:49.940345 | orchestrator | 00:01:49.940 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-09-10 00:01:49.940668 | orchestrator | 00:01:49.940 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-09-10 00:01:49.940714 | orchestrator | 00:01:49.940 STDOUT terraform: you run "tofu init" in the future.
2025-09-10 00:01:49.940733 | orchestrator | 00:01:49.940 STDOUT terraform: OpenTofu has been successfully initialized!
2025-09-10 00:01:49.940739 | orchestrator | 00:01:49.940 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-09-10 00:01:49.940745 | orchestrator | 00:01:49.940 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-09-10 00:01:49.940751 | orchestrator | 00:01:49.940 STDOUT terraform: should now work.
2025-09-10 00:01:49.940756 | orchestrator | 00:01:49.940 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-09-10 00:01:49.940762 | orchestrator | 00:01:49.940 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-09-10 00:01:49.940772 | orchestrator | 00:01:49.940 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-09-10 00:01:50.049090 | orchestrator | 00:01:50.048 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
2025-09-10 00:01:50.049235 | orchestrator | 00:01:50.049 WARN The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead.
2025-09-10 00:01:50.259545 | orchestrator | 00:01:50.259 STDOUT terraform: Created and switched to workspace "ci"!
2025-09-10 00:01:50.259621 | orchestrator | 00:01:50.259 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-09-10 00:01:50.259632 | orchestrator | 00:01:50.259 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-09-10 00:01:50.259637 | orchestrator | 00:01:50.259 STDOUT terraform: for this configuration.
2025-09-10 00:01:50.398104 | orchestrator | 00:01:50.397 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
2025-09-10 00:01:50.398190 | orchestrator | 00:01:50.397 WARN The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead.
2025-09-10 00:01:50.503250 | orchestrator | 00:01:50.503 STDOUT terraform: ci.auto.tfvars
2025-09-10 00:01:50.508472 | orchestrator | 00:01:50.508 STDOUT terraform: default_custom.tf
2025-09-10 00:01:50.640286 | orchestrator | 00:01:50.640 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
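Terragrunt repeats the same `TERRAGRUNT_TFPATH` deprecation warning before every invocation, along with warnings about the bare `workspace` and `fmt` subcommands. A minimal sketch of the migration the warnings ask for, using the tofu binary path shown in the log; how exactly the job exports the variable is an assumption:

```shell
# Deprecated spelling, still used by this job (triggers the WARN lines above):
export TERRAGRUNT_TFPATH=/home/zuul-testbed03/terraform
# Replacement spelling suggested by Terragrunt's own warning:
export TG_TF_PATH=/home/zuul-testbed03/terraform
# The warnings likewise recommend `terragrunt run --` instead of bare subcommands, e.g.:
#   terragrunt workspace new ci  ->  terragrunt run -- workspace new ci
#   terragrunt fmt               ->  terragrunt run -- fmt
echo "$TG_TF_PATH"   # prints /home/zuul-testbed03/terraform
```

Setting only the new variable should silence the repeated warnings on Terragrunt versions that support it.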
2025-09-10 00:01:51.463782 | orchestrator | 00:01:51.463 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-09-10 00:01:52.005776 | orchestrator | 00:01:52.005 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-09-10 00:01:52.252224 | orchestrator | 00:01:52.251 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-09-10 00:01:52.252309 | orchestrator | 00:01:52.252 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-09-10 00:01:52.252322 | orchestrator | 00:01:52.252 STDOUT terraform:   + create
2025-09-10 00:01:52.252332 | orchestrator | 00:01:52.252 STDOUT terraform:  <= read (data resources)
2025-09-10 00:01:52.252342 | orchestrator | 00:01:52.252 STDOUT terraform: OpenTofu will perform the following actions:
2025-09-10 00:01:52.252360 | orchestrator | 00:01:52.252 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-09-10 00:01:52.252369 | orchestrator | 00:01:52.252 STDOUT terraform:   # (config refers to values not yet known)
2025-09-10 00:01:52.252378 | orchestrator | 00:01:52.252 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-09-10 00:01:52.252386 | orchestrator | 00:01:52.252 STDOUT terraform:       + checksum    = (known after apply)
2025-09-10 00:01:52.252395 | orchestrator | 00:01:52.252 STDOUT terraform:       + created_at  = (known after apply)
2025-09-10 00:01:52.252403 | orchestrator | 00:01:52.252 STDOUT terraform:       + file        = (known after apply)
2025-09-10 00:01:52.252411 | orchestrator | 00:01:52.252 STDOUT terraform:       + id          = (known after apply)
2025-09-10 00:01:52.252422 | orchestrator | 00:01:52.252 STDOUT terraform:       + metadata    = (known after apply)
2025-09-10 00:01:52.252451 | orchestrator | 00:01:52.252 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-09-10 00:01:52.252459 | orchestrator | 00:01:52.252 STDOUT terraform:       + min_ram_mb  = (known after apply)
2025-09-10 00:01:52.252472 | orchestrator | 00:01:52.252 STDOUT terraform:       + most_recent = true
2025-09-10 00:01:52.252480 | orchestrator | 00:01:52.252 STDOUT terraform:       + name        = (known after apply)
2025-09-10 00:01:52.252488 | orchestrator | 00:01:52.252 STDOUT terraform:       + protected   = (known after apply)
2025-09-10 00:01:52.252499 | orchestrator | 00:01:52.252 STDOUT terraform:       + region      = (known after apply)
2025-09-10 00:01:52.252553 | orchestrator | 00:01:52.252 STDOUT terraform:       + schema      = (known after apply)
2025-09-10 00:01:52.252564 | orchestrator | 00:01:52.252 STDOUT terraform:       + size_bytes  = (known after apply)
2025-09-10 00:01:52.252575 | orchestrator | 00:01:52.252 STDOUT terraform:       + tags        = (known after apply)
2025-09-10 00:01:52.252638 | orchestrator | 00:01:52.252 STDOUT terraform:       + updated_at  = (known after apply)
2025-09-10 00:01:52.252649 | orchestrator | 00:01:52.252 STDOUT terraform:     }
2025-09-10 00:01:52.252665 | orchestrator | 00:01:52.252 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-09-10 00:01:52.252676 | orchestrator | 00:01:52.252 STDOUT terraform:   # (config refers to values not yet known)
2025-09-10 00:01:52.252736 | orchestrator | 00:01:52.252 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-09-10 00:01:52.252749 | orchestrator | 00:01:52.252 STDOUT terraform:       + checksum    = (known after apply)
2025-09-10 00:01:52.252760 | orchestrator | 00:01:52.252 STDOUT terraform:       + created_at  = (known after apply)
2025-09-10 00:01:52.252808 | orchestrator | 00:01:52.252 STDOUT terraform:       + file        = (known after apply)
2025-09-10 00:01:52.252820 | orchestrator | 00:01:52.252 STDOUT terraform:       + id          = (known after apply)
2025-09-10 00:01:52.252831 | orchestrator | 00:01:52.252 STDOUT terraform:       + metadata    = (known after apply)
2025-09-10 00:01:52.252872 | orchestrator | 00:01:52.252 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-09-10 00:01:52.252884 | orchestrator | 00:01:52.252 STDOUT terraform:       + min_ram_mb  = (known after apply)
2025-09-10 00:01:52.252929 | orchestrator | 00:01:52.252 STDOUT terraform:       + most_recent = true
2025-09-10 00:01:52.252942 | orchestrator | 00:01:52.252 STDOUT terraform:       + name        = (known after apply)
2025-09-10 00:01:52.252952 | orchestrator | 00:01:52.252 STDOUT terraform:       + protected   = (known after apply)
2025-09-10 00:01:52.252984 | orchestrator | 00:01:52.252 STDOUT terraform:       + region      = (known after apply)
2025-09-10 00:01:52.253064 | orchestrator | 00:01:52.252 STDOUT terraform:       + schema      = (known after apply)
2025-09-10 00:01:52.253074 | orchestrator | 00:01:52.253 STDOUT terraform:       + size_bytes  = (known after apply)
2025-09-10 00:01:52.253085 | orchestrator | 00:01:52.253 STDOUT terraform:       + tags        = (known after apply)
2025-09-10 00:01:52.253116 | orchestrator | 00:01:52.253 STDOUT terraform:       + updated_at  = (known after apply)
2025-09-10 00:01:52.253143 | orchestrator | 00:01:52.253 STDOUT terraform:     }
2025-09-10 00:01:52.253563 | orchestrator | 00:01:52.253 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-09-10 00:01:52.253608 | orchestrator | 00:01:52.253 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-09-10 00:01:52.253640 | orchestrator | 00:01:52.253 STDOUT terraform:       + content              = (known after apply)
2025-09-10 00:01:52.253673 | orchestrator | 00:01:52.253 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-09-10 00:01:52.253728 | orchestrator | 00:01:52.253 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-09-10 00:01:52.253741 | orchestrator | 00:01:52.253 STDOUT terraform:       + content_md5          = (known after apply)
2025-09-10 00:01:52.253800 | orchestrator | 00:01:52.253 STDOUT terraform:       + content_sha1         = (known after apply)
2025-09-10 00:01:52.253812 | orchestrator | 00:01:52.253 STDOUT terraform:       + content_sha256       = (known after apply)
2025-09-10 00:01:52.253862 | orchestrator | 00:01:52.253 STDOUT terraform:       + content_sha512       = (known after apply)
2025-09-10 00:01:52.253876 | orchestrator | 00:01:52.253 STDOUT terraform:       + directory_permission = "0777"
2025-09-10 00:01:52.253931 | orchestrator | 00:01:52.253 STDOUT terraform:       + file_permission      = "0644"
2025-09-10 00:01:52.253944 | orchestrator | 00:01:52.253 STDOUT terraform:       + filename             = ".MANAGER_ADDRESS.ci"
2025-09-10 00:01:52.253987 | orchestrator | 00:01:52.253 STDOUT terraform:       + id                   = (known after apply)
2025-09-10 00:01:52.254074 | orchestrator | 00:01:52.253 STDOUT terraform:     }
2025-09-10 00:01:52.254083 | orchestrator | 00:01:52.253 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-09-10 00:01:52.254094 | orchestrator | 00:01:52.254 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-09-10 00:01:52.254126 | orchestrator | 00:01:52.254 STDOUT terraform:       + content              = (known after apply)
2025-09-10 00:01:52.254166 | orchestrator | 00:01:52.254 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-09-10 00:01:52.254193 | orchestrator | 00:01:52.254 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-09-10 00:01:52.254236 | orchestrator | 00:01:52.254 STDOUT terraform:       + content_md5          = (known after apply)
2025-09-10 00:01:52.254289 | orchestrator | 00:01:52.254 STDOUT terraform:       + content_sha1         = (known after apply)
2025-09-10 00:01:52.254301 | orchestrator | 00:01:52.254 STDOUT terraform:       + content_sha256       = (known after apply)
2025-09-10 00:01:52.254347 | orchestrator | 00:01:52.254 STDOUT terraform:       + content_sha512       = (known after apply)
2025-09-10 00:01:52.254377 | orchestrator | 00:01:52.254 STDOUT terraform:       + directory_permission = "0777"
2025-09-10 00:01:52.254403 | orchestrator | 00:01:52.254 STDOUT terraform:       + file_permission      = "0644"
2025-09-10 00:01:52.254434 | orchestrator | 00:01:52.254 STDOUT terraform:       + filename             = ".id_rsa.ci.pub"
2025-09-10 00:01:52.254463 | orchestrator | 00:01:52.254 STDOUT terraform:       + id                   = (known after apply)
2025-09-10 00:01:52.254474 | orchestrator | 00:01:52.254 STDOUT terraform:     }
2025-09-10 00:01:52.254524 | orchestrator | 00:01:52.254 STDOUT terraform:   # local_file.inventory will be created
2025-09-10 00:01:52.254535 | orchestrator | 00:01:52.254 STDOUT terraform:   + resource "local_file" "inventory" {
2025-09-10 00:01:52.254566 | orchestrator | 00:01:52.254 STDOUT terraform:       + content              = (known after apply)
2025-09-10 00:01:52.254595 | orchestrator | 00:01:52.254 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-09-10 00:01:52.254638 | orchestrator | 00:01:52.254 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-09-10 00:01:52.254686 | orchestrator | 00:01:52.254 STDOUT terraform:       + content_md5          = (known after apply)
2025-09-10 00:01:52.254714 | orchestrator | 00:01:52.254 STDOUT terraform:       + content_sha1         = (known after apply)
2025-09-10 00:01:52.254748 | orchestrator | 00:01:52.254 STDOUT terraform:       + content_sha256       = (known after apply)
2025-09-10 00:01:52.254796 | orchestrator | 00:01:52.254 STDOUT terraform:       + content_sha512       = (known after apply)
2025-09-10 00:01:52.254835 | orchestrator | 00:01:52.254 STDOUT terraform:       + directory_permission = "0777"
2025-09-10 00:01:52.254881 | orchestrator | 00:01:52.254 STDOUT terraform:       + file_permission      = "0644"
2025-09-10 00:01:52.254892 | orchestrator | 00:01:52.254 STDOUT terraform:       + filename             = "inventory.ci"
2025-09-10 00:01:52.254928 | orchestrator | 00:01:52.254 STDOUT terraform:       + id                   = (known after apply)
2025-09-10 00:01:52.254953 | orchestrator | 00:01:52.254 STDOUT terraform:     }
2025-09-10 00:01:52.254963 | orchestrator | 00:01:52.254 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-09-10 00:01:52.255027 | orchestrator | 00:01:52.254 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-09-10 00:01:52.260538 | orchestrator | 00:01:52.255 STDOUT terraform:       + content              = (sensitive value)
2025-09-10 00:01:52.260590 | orchestrator | 00:01:52.258 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-09-10 00:01:52.260597 | orchestrator | 00:01:52.258 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-09-10 00:01:52.260602 | orchestrator | 00:01:52.258 STDOUT terraform:       + content_md5          = (known after apply)
2025-09-10 00:01:52.260608 | orchestrator | 00:01:52.258 STDOUT terraform:       + content_sha1         = (known after apply)
2025-09-10 00:01:52.260614 | orchestrator | 00:01:52.258 STDOUT terraform:       + content_sha256       = (known after apply)
2025-09-10 00:01:52.260619 | orchestrator | 00:01:52.258 STDOUT terraform:       + content_sha512       = (known after apply)
2025-09-10 00:01:52.260625 | orchestrator | 00:01:52.258 STDOUT terraform:       + directory_permission = "0700"
2025-09-10 00:01:52.260631 | orchestrator | 00:01:52.258 STDOUT terraform:       + file_permission      = "0600"
2025-09-10 00:01:52.260637 | orchestrator | 00:01:52.258 STDOUT terraform:       + filename             = ".id_rsa.ci"
2025-09-10 00:01:52.260642 | orchestrator | 00:01:52.258 STDOUT terraform:       + id                   = (known after apply)
2025-09-10 00:01:52.260648 | orchestrator | 00:01:52.258 STDOUT terraform:     }
2025-09-10 00:01:52.260654 | orchestrator | 00:01:52.258 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-09-10 00:01:52.260660 | orchestrator | 00:01:52.258 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-09-10 00:01:52.260665 | orchestrator | 00:01:52.258 STDOUT terraform:       + id = (known after apply)
2025-09-10 00:01:52.260671 | orchestrator | 00:01:52.258 STDOUT terraform:     }
2025-09-10 00:01:52.260677 | orchestrator | 00:01:52.258 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-09-10 00:01:52.260699 | orchestrator | 00:01:52.258 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-09-10 00:01:52.260705 | orchestrator | 00:01:52.258 STDOUT terraform:       + attachment           = (known after apply)
2025-09-10 00:01:52.260710 | orchestrator | 00:01:52.258 STDOUT terraform:       + availability_zone    = "nova"
2025-09-10 00:01:52.260716 | orchestrator | 00:01:52.258 STDOUT terraform:       + id                   = (known after apply)
2025-09-10 00:01:52.260722 | orchestrator | 00:01:52.258 STDOUT terraform:       + image_id             = (known after apply)
2025-09-10 00:01:52.260727 | orchestrator | 00:01:52.258 STDOUT terraform:       + metadata             = (known after apply)
2025-09-10 00:01:52.260733 | orchestrator | 00:01:52.258 STDOUT terraform:       + name                 = "testbed-volume-manager-base"
2025-09-10 00:01:52.260738 | orchestrator | 00:01:52.258 STDOUT terraform:       + region               = (known after apply)
2025-09-10 00:01:52.260744 | orchestrator | 00:01:52.259 STDOUT terraform:       + size                 = 80
2025-09-10 00:01:52.260749 | orchestrator | 00:01:52.259 STDOUT terraform:       + volume_retype_policy = "never"
2025-09-10 00:01:52.260754 | orchestrator | 00:01:52.259 STDOUT terraform:       + volume_type          = "ssd"
2025-09-10 00:01:52.260760 | orchestrator | 00:01:52.259 STDOUT terraform:     }
2025-09-10 00:01:52.260765 | orchestrator | 00:01:52.259 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-09-10 00:01:52.260771 | orchestrator | 00:01:52.259 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-10 00:01:52.260776 | orchestrator | 00:01:52.259 STDOUT terraform:       + attachment           = (known after apply)
2025-09-10 00:01:52.260782 | orchestrator | 00:01:52.259 STDOUT terraform:       + availability_zone    = "nova"
2025-09-10 00:01:52.260787 | orchestrator | 00:01:52.259 STDOUT terraform:       + id                   = (known after apply)
2025-09-10 00:01:52.260793 | orchestrator | 00:01:52.259 STDOUT terraform:       + image_id             = (known after apply)
2025-09-10 00:01:52.260798 | orchestrator | 00:01:52.259 STDOUT terraform:       + metadata             = (known after apply)
2025-09-10 00:01:52.260812 | orchestrator | 00:01:52.259 STDOUT terraform:       + name                 = "testbed-volume-0-node-base"
2025-09-10 00:01:52.260818 | orchestrator | 00:01:52.259 STDOUT terraform:       + region               = (known after apply)
2025-09-10 00:01:52.260823 | orchestrator | 00:01:52.259 STDOUT terraform:       + size                 = 80
2025-09-10 00:01:52.260829 | orchestrator | 00:01:52.259 STDOUT terraform:       + volume_retype_policy = "never"
2025-09-10 00:01:52.260834 | orchestrator | 00:01:52.259 STDOUT terraform:       + volume_type          = "ssd"
2025-09-10 00:01:52.260840 | orchestrator | 00:01:52.259 STDOUT terraform:     }
2025-09-10 00:01:52.260845 | orchestrator | 00:01:52.259 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-09-10 00:01:52.260851 | orchestrator | 00:01:52.259 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-10 00:01:52.260856 | orchestrator | 00:01:52.259 STDOUT terraform:       + attachment           = (known after apply)
2025-09-10 00:01:52.260872 | orchestrator | 00:01:52.259 STDOUT terraform:       + availability_zone    = "nova"
2025-09-10 00:01:52.260877 | orchestrator | 00:01:52.259 STDOUT terraform:       + id                   = (known after apply)
2025-09-10 00:01:52.260882 | orchestrator | 00:01:52.259 STDOUT terraform:       + image_id             = (known after apply)
2025-09-10 00:01:52.260888 | orchestrator | 00:01:52.259 STDOUT terraform:       + metadata             = (known after apply)
2025-09-10 00:01:52.260894 | orchestrator | 00:01:52.259 STDOUT terraform:       + name                 = "testbed-volume-1-node-base"
2025-09-10 00:01:52.260899 | orchestrator | 00:01:52.259 STDOUT terraform:       + region               = (known after apply)
2025-09-10 00:01:52.260905 | orchestrator | 00:01:52.259 STDOUT terraform:       + size                 = 80
2025-09-10 00:01:52.260910 | orchestrator | 00:01:52.259 STDOUT terraform:       + volume_retype_policy = "never"
2025-09-10 00:01:52.260915 | orchestrator | 00:01:52.259 STDOUT terraform:       + volume_type          = "ssd"
2025-09-10 00:01:52.260921 | orchestrator | 00:01:52.259 STDOUT terraform:     }
2025-09-10 00:01:52.260926 | orchestrator | 00:01:52.259 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-09-10 00:01:52.260932 | orchestrator | 00:01:52.259 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-10 00:01:52.260937 | orchestrator | 00:01:52.260 STDOUT terraform:       + attachment           = (known after apply)
2025-09-10 00:01:52.260946 | orchestrator | 00:01:52.260 STDOUT terraform:       + availability_zone    = "nova"
2025-09-10 00:01:52.260952 | orchestrator | 00:01:52.260 STDOUT terraform:       + id                   = (known after apply)
2025-09-10 00:01:52.260957 | orchestrator | 00:01:52.260 STDOUT terraform:       + image_id             = (known after apply)
2025-09-10 00:01:52.260963 | orchestrator | 00:01:52.260 STDOUT terraform:       + metadata             = (known after apply)
2025-09-10 00:01:52.260968 | orchestrator | 00:01:52.260 STDOUT terraform:       + name                 = "testbed-volume-2-node-base"
2025-09-10 00:01:52.260974 | orchestrator | 00:01:52.260 STDOUT terraform:       + region               = (known after apply)
2025-09-10 00:01:52.260979 | orchestrator | 00:01:52.260 STDOUT terraform:       + size                 = 80
2025-09-10 00:01:52.260984 | orchestrator | 00:01:52.260 STDOUT terraform:       + volume_retype_policy = "never"
2025-09-10 00:01:52.260990 | orchestrator | 00:01:52.260 STDOUT terraform:       + volume_type          = "ssd"
2025-09-10 00:01:52.261040 | orchestrator | 00:01:52.260 STDOUT terraform:     }
2025-09-10 00:01:52.261046 | orchestrator | 00:01:52.260 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-09-10 00:01:52.261051 | orchestrator | 00:01:52.260 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-10 00:01:52.261057 | orchestrator | 00:01:52.260 STDOUT terraform:       + attachment           = (known after apply)
2025-09-10 00:01:52.261062 | orchestrator | 00:01:52.260 STDOUT terraform:       + availability_zone    = "nova"
2025-09-10 00:01:52.261068 | orchestrator | 00:01:52.260 STDOUT terraform:       + id                   = (known after apply)
2025-09-10 00:01:52.261073 | orchestrator | 00:01:52.260 STDOUT terraform:       + image_id             = (known after apply)
2025-09-10 00:01:52.261084 | orchestrator | 00:01:52.260 STDOUT terraform:       + metadata             = (known after apply)
2025-09-10 00:01:52.261094 | orchestrator | 00:01:52.260 STDOUT terraform:       + name                 = "testbed-volume-3-node-base"
2025-09-10 00:01:52.261100 | orchestrator | 00:01:52.260 STDOUT terraform:       + region               = (known after apply)
2025-09-10 00:01:52.261105 | orchestrator | 00:01:52.260 STDOUT terraform:       + size                 = 80
2025-09-10 00:01:52.261110 | orchestrator | 00:01:52.260 STDOUT terraform:       + volume_retype_policy = "never"
2025-09-10 00:01:52.261116 | orchestrator | 00:01:52.260 STDOUT terraform:       + volume_type          = "ssd"
2025-09-10 00:01:52.261121 | orchestrator | 00:01:52.260 STDOUT terraform:     }
2025-09-10 00:01:52.261127 | orchestrator | 00:01:52.260 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-09-10 00:01:52.261135 | orchestrator | 00:01:52.260 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-10 00:01:52.261141 | orchestrator | 00:01:52.261 STDOUT terraform:       + attachment           = (known after apply)
2025-09-10 00:01:52.261149 | orchestrator | 00:01:52.261 STDOUT terraform:       + availability_zone    = "nova"
2025-09-10 00:01:52.261155 | orchestrator | 00:01:52.261 STDOUT terraform:       + id                   = (known after apply)
2025-09-10 00:01:52.261219 | orchestrator | 00:01:52.261 STDOUT terraform:       + image_id             = (known after apply)
2025-09-10 00:01:52.261228 | orchestrator | 00:01:52.261 STDOUT terraform:       + metadata             = (known after apply)
2025-09-10 00:01:52.261278 | orchestrator | 00:01:52.261 STDOUT terraform:       + name                 = "testbed-volume-4-node-base"
2025-09-10 00:01:52.261317 | orchestrator | 00:01:52.261 STDOUT terraform:       + region               = (known after apply)
2025-09-10 00:01:52.261353 | orchestrator | 00:01:52.261 STDOUT terraform:       + size                 = 80
2025-09-10 00:01:52.261361 | orchestrator | 00:01:52.261 STDOUT terraform:       + volume_retype_policy = "never"
2025-09-10 00:01:52.261388 | orchestrator | 00:01:52.261 STDOUT terraform:       + volume_type          = "ssd"
2025-09-10 00:01:52.261397 | orchestrator | 00:01:52.261 STDOUT terraform:     }
2025-09-10 00:01:52.261450 | orchestrator | 00:01:52.261 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-09-10 00:01:52.261496 | orchestrator | 00:01:52.261 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-10 00:01:52.261531 | orchestrator | 00:01:52.261 STDOUT terraform:       + attachment           = (known after apply)
2025-09-10 00:01:52.261558 | orchestrator | 00:01:52.261 STDOUT terraform:       + availability_zone    = "nova"
2025-09-10 00:01:52.261615 | orchestrator | 00:01:52.261 STDOUT terraform:       + id                   = (known after apply)
2025-09-10 00:01:52.261623 | orchestrator | 00:01:52.261 STDOUT terraform:       + image_id             = (known after apply)
2025-09-10 00:01:52.261667 | orchestrator | 00:01:52.261 STDOUT terraform:       + metadata             = (known after apply)
2025-09-10 00:01:52.261712 | orchestrator | 00:01:52.261 STDOUT terraform:       + name                 = "testbed-volume-5-node-base"
2025-09-10 00:01:52.261749 | orchestrator | 00:01:52.261 STDOUT terraform:       + region               = (known after apply)
2025-09-10 00:01:52.261766 | orchestrator | 00:01:52.261 STDOUT terraform:       + size                 = 80
2025-09-10 00:01:52.261799 | orchestrator | 00:01:52.261 STDOUT terraform:       + volume_retype_policy = "never"
2025-09-10 00:01:52.261820 | orchestrator | 00:01:52.261 STDOUT terraform:       + volume_type          = "ssd"
2025-09-10 00:01:52.261828 | orchestrator | 00:01:52.261 STDOUT terraform:     }
2025-09-10 00:01:52.261880 | orchestrator | 00:01:52.261 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-09-10 00:01:52.261922 | orchestrator | 00:01:52.261 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-09-10 00:01:52.261978 | orchestrator | 00:01:52.261 STDOUT
terraform:  + attachment = (known after apply) 2025-09-10 00:01:52.261986 | orchestrator | 00:01:52.261 STDOUT terraform:  + availability_zone = "nova" 2025-09-10 00:01:52.262066 | orchestrator | 00:01:52.261 STDOUT terraform:  + id = (known after apply) 2025-09-10 00:01:52.262119 | orchestrator | 00:01:52.262 STDOUT terraform:  + metadata = (known after apply) 2025-09-10 00:01:52.262143 | orchestrator | 00:01:52.262 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-09-10 00:01:52.262181 | orchestrator | 00:01:52.262 STDOUT terraform:  + region = (known after apply) 2025-09-10 00:01:52.262207 | orchestrator | 00:01:52.262 STDOUT terraform:  + size = 20 2025-09-10 00:01:52.262230 | orchestrator | 00:01:52.262 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-10 00:01:52.262259 | orchestrator | 00:01:52.262 STDOUT terraform:  + volume_type = "ssd" 2025-09-10 00:01:52.262266 | orchestrator | 00:01:52.262 STDOUT terraform:  } 2025-09-10 00:01:52.262328 | orchestrator | 00:01:52.262 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-09-10 00:01:52.262363 | orchestrator | 00:01:52.262 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-10 00:01:52.262402 | orchestrator | 00:01:52.262 STDOUT terraform:  + attachment = (known after apply) 2025-09-10 00:01:52.262426 | orchestrator | 00:01:52.262 STDOUT terraform:  + availability_zone = "nova" 2025-09-10 00:01:52.262465 | orchestrator | 00:01:52.262 STDOUT terraform:  + id = (known after apply) 2025-09-10 00:01:52.262500 | orchestrator | 00:01:52.262 STDOUT terraform:  + metadata = (known after apply) 2025-09-10 00:01:52.262544 | orchestrator | 00:01:52.262 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-09-10 00:01:52.262580 | orchestrator | 00:01:52.262 STDOUT terraform:  + region = (known after apply) 2025-09-10 00:01:52.262602 | orchestrator | 00:01:52.262 STDOUT terraform:  + size = 20 2025-09-10 00:01:52.262629 | 
orchestrator | 00:01:52.262 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-10 00:01:52.262654 | orchestrator | 00:01:52.262 STDOUT terraform:  + volume_type = "ssd" 2025-09-10 00:01:52.262662 | orchestrator | 00:01:52.262 STDOUT terraform:  } 2025-09-10 00:01:52.262726 | orchestrator | 00:01:52.262 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-09-10 00:01:52.262755 | orchestrator | 00:01:52.262 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-10 00:01:52.262791 | orchestrator | 00:01:52.262 STDOUT terraform:  + attachment = (known after apply) 2025-09-10 00:01:52.262818 | orchestrator | 00:01:52.262 STDOUT terraform:  + availability_zone = "nova" 2025-09-10 00:01:52.262856 | orchestrator | 00:01:52.262 STDOUT terraform:  + id = (known after apply) 2025-09-10 00:01:52.262894 | orchestrator | 00:01:52.262 STDOUT terraform:  + metadata = (known after apply) 2025-09-10 00:01:52.262933 | orchestrator | 00:01:52.262 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-09-10 00:01:52.262972 | orchestrator | 00:01:52.262 STDOUT terraform:  + region = (known after apply) 2025-09-10 00:01:52.263007 | orchestrator | 00:01:52.262 STDOUT terraform:  + size = 20 2025-09-10 00:01:52.263040 | orchestrator | 00:01:52.262 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-10 00:01:52.263061 | orchestrator | 00:01:52.263 STDOUT terraform:  + volume_type = "ssd" 2025-09-10 00:01:52.263081 | orchestrator | 00:01:52.263 STDOUT terraform:  } 2025-09-10 00:01:52.263123 | orchestrator | 00:01:52.263 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-09-10 00:01:52.263169 | orchestrator | 00:01:52.263 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-10 00:01:52.263204 | orchestrator | 00:01:52.263 STDOUT terraform:  + attachment = (known after apply) 2025-09-10 00:01:52.263230 | orchestrator | 
00:01:52.263 STDOUT terraform:  + availability_zone = "nova" 2025-09-10 00:01:52.263273 | orchestrator | 00:01:52.263 STDOUT terraform:  + id = (known after apply) 2025-09-10 00:01:52.263310 | orchestrator | 00:01:52.263 STDOUT terraform:  + metadata = (known after apply) 2025-09-10 00:01:52.263351 | orchestrator | 00:01:52.263 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-09-10 00:01:52.263387 | orchestrator | 00:01:52.263 STDOUT terraform:  + region = (known after apply) 2025-09-10 00:01:52.263427 | orchestrator | 00:01:52.263 STDOUT terraform:  + size = 20 2025-09-10 00:01:52.263434 | orchestrator | 00:01:52.263 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-10 00:01:52.263457 | orchestrator | 00:01:52.263 STDOUT terraform:  + volume_type = "ssd" 2025-09-10 00:01:52.263465 | orchestrator | 00:01:52.263 STDOUT terraform:  } 2025-09-10 00:01:52.263563 | orchestrator | 00:01:52.263 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-09-10 00:01:52.263607 | orchestrator | 00:01:52.263 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-10 00:01:52.263646 | orchestrator | 00:01:52.263 STDOUT terraform:  + attachment = (known after apply) 2025-09-10 00:01:52.263670 | orchestrator | 00:01:52.263 STDOUT terraform:  + availability_zone = "nova" 2025-09-10 00:01:52.263712 | orchestrator | 00:01:52.263 STDOUT terraform:  + id = (known after apply) 2025-09-10 00:01:52.263760 | orchestrator | 00:01:52.263 STDOUT terraform:  + metadata = (known after apply) 2025-09-10 00:01:52.263800 | orchestrator | 00:01:52.263 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-09-10 00:01:52.263836 | orchestrator | 00:01:52.263 STDOUT terraform:  + region = (known after apply) 2025-09-10 00:01:52.263855 | orchestrator | 00:01:52.263 STDOUT terraform:  + size = 20 2025-09-10 00:01:52.263877 | orchestrator | 00:01:52.263 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-10 
00:01:52.263904 | orchestrator | 00:01:52.263 STDOUT terraform:  + volume_type = "ssd" 2025-09-10 00:01:52.263912 | orchestrator | 00:01:52.263 STDOUT terraform:  } 2025-09-10 00:01:52.263962 | orchestrator | 00:01:52.263 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-09-10 00:01:52.264018 | orchestrator | 00:01:52.263 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-10 00:01:52.264055 | orchestrator | 00:01:52.264 STDOUT terraform:  + attachment = (known after apply) 2025-09-10 00:01:52.264079 | orchestrator | 00:01:52.264 STDOUT terraform:  + availability_zone = "nova" 2025-09-10 00:01:52.264117 | orchestrator | 00:01:52.264 STDOUT terraform:  + id = (known after apply) 2025-09-10 00:01:52.264153 | orchestrator | 00:01:52.264 STDOUT terraform:  + metadata = (known after apply) 2025-09-10 00:01:52.264195 | orchestrator | 00:01:52.264 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-09-10 00:01:52.264230 | orchestrator | 00:01:52.264 STDOUT terraform:  + region = (known after apply) 2025-09-10 00:01:52.264257 | orchestrator | 00:01:52.264 STDOUT terraform:  + size = 20 2025-09-10 00:01:52.264281 | orchestrator | 00:01:52.264 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-10 00:01:52.264307 | orchestrator | 00:01:52.264 STDOUT terraform:  + volume_type = "ssd" 2025-09-10 00:01:52.264315 | orchestrator | 00:01:52.264 STDOUT terraform:  } 2025-09-10 00:01:52.264364 | orchestrator | 00:01:52.264 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-09-10 00:01:52.264409 | orchestrator | 00:01:52.264 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-10 00:01:52.264445 | orchestrator | 00:01:52.264 STDOUT terraform:  + attachment = (known after apply) 2025-09-10 00:01:52.264473 | orchestrator | 00:01:52.264 STDOUT terraform:  + availability_zone = "nova" 2025-09-10 00:01:52.264510 | 
orchestrator | 00:01:52.264 STDOUT terraform:  + id = (known after apply) 2025-09-10 00:01:52.264548 | orchestrator | 00:01:52.264 STDOUT terraform:  + metadata = (known after apply) 2025-09-10 00:01:52.264586 | orchestrator | 00:01:52.264 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-09-10 00:01:52.264626 | orchestrator | 00:01:52.264 STDOUT terraform:  + region = (known after apply) 2025-09-10 00:01:52.264648 | orchestrator | 00:01:52.264 STDOUT terraform:  + size = 20 2025-09-10 00:01:52.264676 | orchestrator | 00:01:52.264 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-10 00:01:52.264704 | orchestrator | 00:01:52.264 STDOUT terraform:  + volume_type = "ssd" 2025-09-10 00:01:52.264712 | orchestrator | 00:01:52.264 STDOUT terraform:  } 2025-09-10 00:01:52.264769 | orchestrator | 00:01:52.264 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-09-10 00:01:52.264804 | orchestrator | 00:01:52.264 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-10 00:01:52.264842 | orchestrator | 00:01:52.264 STDOUT terraform:  + attachment = (known after apply) 2025-09-10 00:01:52.264866 | orchestrator | 00:01:52.264 STDOUT terraform:  + availability_zone = "nova" 2025-09-10 00:01:52.264905 | orchestrator | 00:01:52.264 STDOUT terraform:  + id = (known after apply) 2025-09-10 00:01:52.264941 | orchestrator | 00:01:52.264 STDOUT terraform:  + metadata = (known after apply) 2025-09-10 00:01:52.264982 | orchestrator | 00:01:52.264 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-09-10 00:01:52.265032 | orchestrator | 00:01:52.264 STDOUT terraform:  + region = (known after apply) 2025-09-10 00:01:52.265051 | orchestrator | 00:01:52.265 STDOUT terraform:  + size = 20 2025-09-10 00:01:52.265080 | orchestrator | 00:01:52.265 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-10 00:01:52.265103 | orchestrator | 00:01:52.265 STDOUT terraform:  + volume_type = "ssd" 
2025-09-10 00:01:52.265111 | orchestrator | 00:01:52.265 STDOUT terraform:  } 2025-09-10 00:01:52.265164 | orchestrator | 00:01:52.265 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-09-10 00:01:52.265207 | orchestrator | 00:01:52.265 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-10 00:01:52.265245 | orchestrator | 00:01:52.265 STDOUT terraform:  + attachment = (known after apply) 2025-09-10 00:01:52.265269 | orchestrator | 00:01:52.265 STDOUT terraform:  + availability_zone = "nova" 2025-09-10 00:01:52.265308 | orchestrator | 00:01:52.265 STDOUT terraform:  + id = (known after apply) 2025-09-10 00:01:52.265343 | orchestrator | 00:01:52.265 STDOUT terraform:  + metadata = (known after apply) 2025-09-10 00:01:52.265383 | orchestrator | 00:01:52.265 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-09-10 00:01:52.265421 | orchestrator | 00:01:52.265 STDOUT terraform:  + region = (known after apply) 2025-09-10 00:01:52.265443 | orchestrator | 00:01:52.265 STDOUT terraform:  + size = 20 2025-09-10 00:01:52.265470 | orchestrator | 00:01:52.265 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-10 00:01:52.265495 | orchestrator | 00:01:52.265 STDOUT terraform:  + volume_type = "ssd" 2025-09-10 00:01:52.265502 | orchestrator | 00:01:52.265 STDOUT terraform:  } 2025-09-10 00:01:52.265555 | orchestrator | 00:01:52.265 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-09-10 00:01:52.265596 | orchestrator | 00:01:52.265 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-09-10 00:01:52.265631 | orchestrator | 00:01:52.265 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-10 00:01:52.265675 | orchestrator | 00:01:52.265 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-10 00:01:52.265711 | orchestrator | 00:01:52.265 STDOUT terraform:  + all_metadata = (known after apply) 
2025-09-10 00:01:52.265749 | orchestrator | 00:01:52.265 STDOUT terraform:  + all_tags = (known after apply) 2025-09-10 00:01:52.265773 | orchestrator | 00:01:52.265 STDOUT terraform:  + availability_zone = "nova" 2025-09-10 00:01:52.265797 | orchestrator | 00:01:52.265 STDOUT terraform:  + config_drive = true 2025-09-10 00:01:52.265833 | orchestrator | 00:01:52.265 STDOUT terraform:  + created = (known after apply) 2025-09-10 00:01:52.265870 | orchestrator | 00:01:52.265 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-10 00:01:52.265901 | orchestrator | 00:01:52.265 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-09-10 00:01:52.265925 | orchestrator | 00:01:52.265 STDOUT terraform:  + force_delete = false 2025-09-10 00:01:52.265961 | orchestrator | 00:01:52.265 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-10 00:01:52.266037 | orchestrator | 00:01:52.265 STDOUT terraform:  + id = (known after apply) 2025-09-10 00:01:52.266285 | orchestrator | 00:01:52.266 STDOUT terraform:  + image_id = (known after apply) 2025-09-10 00:01:52.267334 | orchestrator | 00:01:52.266 STDOUT terraform:  + image_name = (known after apply) 2025-09-10 00:01:52.267344 | orchestrator | 00:01:52.267 STDOUT terraform:  + key_pair = "testbed" 2025-09-10 00:01:52.267350 | orchestrator | 00:01:52.267 STDOUT terraform:  + name = "testbed-manager" 2025-09-10 00:01:52.267354 | orchestrator | 00:01:52.267 STDOUT terraform:  + power_state = "active" 2025-09-10 00:01:52.267359 | orchestrator | 00:01:52.267 STDOUT terraform:  + region = (known after apply) 2025-09-10 00:01:52.267363 | orchestrator | 00:01:52.267 STDOUT terraform:  + security_groups = (known after apply) 2025-09-10 00:01:52.267368 | orchestrator | 00:01:52.267 STDOUT terraform:  + stop_before_destroy = false 2025-09-10 00:01:52.267394 | orchestrator | 00:01:52.267 STDOUT terraform:  + updated = (known after apply) 2025-09-10 00:01:52.267400 | orchestrator | 00:01:52.267 STDOUT terraform:  + 
user_data = (sensitive value) 2025-09-10 00:01:52.267405 | orchestrator | 00:01:52.267 STDOUT terraform:  + block_device { 2025-09-10 00:01:52.267409 | orchestrator | 00:01:52.267 STDOUT terraform:  + boot_index = 0 2025-09-10 00:01:52.267416 | orchestrator | 00:01:52.267 STDOUT terraform:  + delete_on_termination = false 2025-09-10 00:01:52.267421 | orchestrator | 00:01:52.267 STDOUT terraform:  + destination_type = "volume" 2025-09-10 00:01:52.267426 | orchestrator | 00:01:52.267 STDOUT terraform:  + multiattach = false 2025-09-10 00:01:52.267430 | orchestrator | 00:01:52.267 STDOUT terraform:  + source_type = "volume" 2025-09-10 00:01:52.267485 | orchestrator | 00:01:52.267 STDOUT terraform:  + uuid = (known after apply) 2025-09-10 00:01:52.267492 | orchestrator | 00:01:52.267 STDOUT terraform:  } 2025-09-10 00:01:52.267498 | orchestrator | 00:01:52.267 STDOUT terraform:  + network { 2025-09-10 00:01:52.267539 | orchestrator | 00:01:52.267 STDOUT terraform:  + access_network = false 2025-09-10 00:01:52.267547 | orchestrator | 00:01:52.267 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-10 00:01:52.267596 | orchestrator | 00:01:52.267 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-10 00:01:52.267635 | orchestrator | 00:01:52.267 STDOUT terraform:  + mac = (known after apply) 2025-09-10 00:01:52.267653 | orchestrator | 00:01:52.267 STDOUT terraform:  + name = (known after apply) 2025-09-10 00:01:52.267690 | orchestrator | 00:01:52.267 STDOUT terraform:  + port = (known after apply) 2025-09-10 00:01:52.267725 | orchestrator | 00:01:52.267 STDOUT terraform:  + uuid = (known after apply) 2025-09-10 00:01:52.267733 | orchestrator | 00:01:52.267 STDOUT terraform:  } 2025-09-10 00:01:52.267741 | orchestrator | 00:01:52.267 STDOUT terraform:  } 2025-09-10 00:01:52.267795 | orchestrator | 00:01:52.267 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-09-10 00:01:52.267860 | orchestrator | 00:01:52.267 
STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-10 00:01:52.267868 | orchestrator | 00:01:52.267 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-10 00:01:52.268084 | orchestrator | 00:01:52.267 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-10 00:01:52.268093 | orchestrator | 00:01:52.267 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-10 00:01:52.268098 | orchestrator | 00:01:52.267 STDOUT terraform:  + all_tags = (known after apply) 2025-09-10 00:01:52.268113 | orchestrator | 00:01:52.267 STDOUT terraform:  + availability_zone = "nova" 2025-09-10 00:01:52.268118 | orchestrator | 00:01:52.268 STDOUT terraform:  + config_drive = true 2025-09-10 00:01:52.268125 | orchestrator | 00:01:52.268 STDOUT terraform:  + created = (known after apply) 2025-09-10 00:01:52.268129 | orchestrator | 00:01:52.268 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-10 00:01:52.268158 | orchestrator | 00:01:52.268 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-10 00:01:52.268194 | orchestrator | 00:01:52.268 STDOUT terraform:  + force_delete = false 2025-09-10 00:01:52.268331 | orchestrator | 00:01:52.268 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-10 00:01:52.268337 | orchestrator | 00:01:52.268 STDOUT terraform:  + id = (known after apply) 2025-09-10 00:01:52.268341 | orchestrator | 00:01:52.268 STDOUT terraform:  + image_id = (known after apply) 2025-09-10 00:01:52.268345 | orchestrator | 00:01:52.268 STDOUT terraform:  + image_name = (known after apply) 2025-09-10 00:01:52.268351 | orchestrator | 00:01:52.268 STDOUT terraform:  + key_pair = "testbed" 2025-09-10 00:01:52.268399 | orchestrator | 00:01:52.268 STDOUT terraform:  + name = "testbed-node-0" 2025-09-10 00:01:52.268406 | orchestrator | 00:01:52.268 STDOUT terraform:  + power_state = "active" 2025-09-10 00:01:52.268462 | orchestrator | 00:01:52.268 STDOUT terraform:  + region = (known after 
apply) 2025-09-10 00:01:52.268472 | orchestrator | 00:01:52.268 STDOUT terraform:  + security_groups = (known after apply) 2025-09-10 00:01:52.268512 | orchestrator | 00:01:52.268 STDOUT terraform:  + stop_before_destroy = false 2025-09-10 00:01:52.268604 | orchestrator | 00:01:52.268 STDOUT terraform:  + updated = (known after apply) 2025-09-10 00:01:52.268612 | orchestrator | 00:01:52.268 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-10 00:01:52.268648 | orchestrator | 00:01:52.268 STDOUT terraform:  + block_device { 2025-09-10 00:01:52.268665 | orchestrator | 00:01:52.268 STDOUT terraform:  + boot_index = 0 2025-09-10 00:01:52.268706 | orchestrator | 00:01:52.268 STDOUT terraform:  + delete_on_termination = false 2025-09-10 00:01:52.268714 | orchestrator | 00:01:52.268 STDOUT terraform:  + destination_type = "volume" 2025-09-10 00:01:52.268752 | orchestrator | 00:01:52.268 STDOUT terraform:  + multiattach = false 2025-09-10 00:01:52.268793 | orchestrator | 00:01:52.268 STDOUT terraform:  + source_type = "volume" 2025-09-10 00:01:52.268838 | orchestrator | 00:01:52.268 STDOUT terraform:  + uuid = (known after apply) 2025-09-10 00:01:52.268847 | orchestrator | 00:01:52.268 STDOUT terraform:  } 2025-09-10 00:01:52.268853 | orchestrator | 00:01:52.268 STDOUT terraform:  + network { 2025-09-10 00:01:52.268858 | orchestrator | 00:01:52.268 STDOUT terraform:  + access_network = false 2025-09-10 00:01:52.268955 | orchestrator | 00:01:52.268 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-10 00:01:52.268961 | orchestrator | 00:01:52.268 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-10 00:01:52.268967 | orchestrator | 00:01:52.268 STDOUT terraform:  + mac = (known after apply) 2025-09-10 00:01:52.269043 | orchestrator | 00:01:52.268 STDOUT terraform:  + name = (known after apply) 2025-09-10 00:01:52.269051 | orchestrator | 00:01:52.268 STDOUT terraform:  + port = (known after apply) 2025-09-10 
00:01:52.269088 | orchestrator | 00:01:52.269 STDOUT terraform:  + uuid = (known after apply) 2025-09-10 00:01:52.269096 | orchestrator | 00:01:52.269 STDOUT terraform:  } 2025-09-10 00:01:52.269102 | orchestrator | 00:01:52.269 STDOUT terraform:  } 2025-09-10 00:01:52.269172 | orchestrator | 00:01:52.269 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-09-10 00:01:52.269183 | orchestrator | 00:01:52.269 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-10 00:01:52.269250 | orchestrator | 00:01:52.269 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-10 00:01:52.269261 | orchestrator | 00:01:52.269 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-10 00:01:52.269311 | orchestrator | 00:01:52.269 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-10 00:01:52.269337 | orchestrator | 00:01:52.269 STDOUT terraform:  + all_tags = (known after apply) 2025-09-10 00:01:52.269367 | orchestrator | 00:01:52.269 STDOUT terraform:  + availability_zone = "nova" 2025-09-10 00:01:52.269410 | orchestrator | 00:01:52.269 STDOUT terraform:  + config_drive = true 2025-09-10 00:01:52.269420 | orchestrator | 00:01:52.269 STDOUT terraform:  + created = (known after apply) 2025-09-10 00:01:52.269493 | orchestrator | 00:01:52.269 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-10 00:01:52.269502 | orchestrator | 00:01:52.269 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-10 00:01:52.269508 | orchestrator | 00:01:52.269 STDOUT terraform:  + force_delete = false 2025-09-10 00:01:52.269570 | orchestrator | 00:01:52.269 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-10 00:01:52.269582 | orchestrator | 00:01:52.269 STDOUT terraform:  + id = (known after apply) 2025-09-10 00:01:52.269657 | orchestrator | 00:01:52.269 STDOUT terraform:  + image_id = (known after apply) 2025-09-10 00:01:52.269666 | orchestrator | 00:01:52.269 STDOUT 
terraform:  + image_name = (known after apply) 2025-09-10 00:01:52.269672 | orchestrator | 00:01:52.269 STDOUT terraform:  + key_pair = "testbed" 2025-09-10 00:01:52.269724 | orchestrator | 00:01:52.269 STDOUT terraform:  + name = "testbed-node-1" 2025-09-10 00:01:52.269735 | orchestrator | 00:01:52.269 STDOUT terraform:  + power_state = "active" 2025-09-10 00:01:52.269799 | orchestrator | 00:01:52.269 STDOUT terraform:  + region = (known after apply) 2025-09-10 00:01:52.269808 | orchestrator | 00:01:52.269 STDOUT terraform:  + security_groups = (known after apply) 2025-09-10 00:01:52.269826 | orchestrator | 00:01:52.269 STDOUT terraform:  + stop_before_destroy = false 2025-09-10 00:01:52.269868 | orchestrator | 00:01:52.269 STDOUT terraform:  + updated = (known after apply) 2025-09-10 00:01:52.269953 | orchestrator | 00:01:52.269 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-10 00:01:52.269963 | orchestrator | 00:01:52.269 STDOUT terraform:  + block_device { 2025-09-10 00:01:52.269969 | orchestrator | 00:01:52.269 STDOUT terraform:  + boot_index = 0 2025-09-10 00:01:52.270004 | orchestrator | 00:01:52.269 STDOUT terraform:  + delete_on_termination = false 2025-09-10 00:01:52.270082 | orchestrator | 00:01:52.269 STDOUT terraform:  + destination_type = "volume" 2025-09-10 00:01:52.270091 | orchestrator | 00:01:52.270 STDOUT terraform:  + multiattach = false 2025-09-10 00:01:52.270121 | orchestrator | 00:01:52.270 STDOUT terraform:  + source_type = "volume" 2025-09-10 00:01:52.270163 | orchestrator | 00:01:52.270 STDOUT terraform:  + uuid = (known after apply) 2025-09-10 00:01:52.270174 | orchestrator | 00:01:52.270 STDOUT terraform:  } 2025-09-10 00:01:52.270179 | orchestrator | 00:01:52.270 STDOUT terraform:  + network { 2025-09-10 00:01:52.270207 | orchestrator | 00:01:52.270 STDOUT terraform:  + access_network = false 2025-09-10 00:01:52.270255 | orchestrator | 00:01:52.270 STDOUT terraform:  + fixed_ip_v4 = (known after 
apply) 2025-09-10 00:01:52.270273 | orchestrator | 00:01:52.270 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-10 00:01:52.270336 | orchestrator | 00:01:52.270 STDOUT terraform:  + mac = (known after apply) 2025-09-10 00:01:52.270345 | orchestrator | 00:01:52.270 STDOUT terraform:  + name = (known after apply) 2025-09-10 00:01:52.270383 | orchestrator | 00:01:52.270 STDOUT terraform:  + port = (known after apply) 2025-09-10 00:01:52.270394 | orchestrator | 00:01:52.270 STDOUT terraform:  + uuid = (known after apply) 2025-09-10 00:01:52.270435 | orchestrator | 00:01:52.270 STDOUT terraform:  } 2025-09-10 00:01:52.270443 | orchestrator | 00:01:52.270 STDOUT terraform:  } 2025-09-10 00:01:52.270502 | orchestrator | 00:01:52.270 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-09-10 00:01:52.270513 | orchestrator | 00:01:52.270 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-10 00:01:52.270570 | orchestrator | 00:01:52.270 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-10 00:01:52.270627 | orchestrator | 00:01:52.270 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-10 00:01:52.270635 | orchestrator | 00:01:52.270 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-10 00:01:52.270665 | orchestrator | 00:01:52.270 STDOUT terraform:  + all_tags = (known after apply) 2025-09-10 00:01:52.270704 | orchestrator | 00:01:52.270 STDOUT terraform:  + availability_zone = "nova" 2025-09-10 00:01:52.270715 | orchestrator | 00:01:52.270 STDOUT terraform:  + config_drive = true 2025-09-10 00:01:52.270774 | orchestrator | 00:01:52.270 STDOUT terraform:  + created = (known after apply) 2025-09-10 00:01:52.270783 | orchestrator | 00:01:52.270 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-10 00:01:52.270810 | orchestrator | 00:01:52.270 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-10 00:01:52.270851 | orchestrator | 00:01:52.270 
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-10 00:01:52.286387 | orchestrator | 00:01:52.284 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-10 00:01:52.286393 | orchestrator | 00:01:52.284 STDOUT terraform:  + all_tags = (known after apply) 2025-09-10 00:01:52.286399 | orchestrator | 00:01:52.284 STDOUT terraform:  + device_id = (known after apply) 2025-09-10 00:01:52.286406 | orchestrator | 00:01:52.284 STDOUT terraform:  + device_owner = (known after apply) 2025-09-10 00:01:52.286412 | orchestrator | 00:01:52.284 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-10 00:01:52.286419 | orchestrator | 00:01:52.284 STDOUT terraform:  + dns_name = (known after apply) 2025-09-10 00:01:52.286425 | orchestrator | 00:01:52.284 STDOUT terraform:  + id = (known after apply) 2025-09-10 00:01:52.286431 | orchestrator | 00:01:52.284 STDOUT terraform:  + mac_address = (known after apply) 2025-09-10 00:01:52.286437 | orchestrator | 00:01:52.284 STDOUT terraform:  + network_id = (known after apply) 2025-09-10 00:01:52.286444 | orchestrator | 00:01:52.284 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-10 00:01:52.286450 | orchestrator | 00:01:52.284 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-10 00:01:52.286456 | orchestrator | 00:01:52.284 STDOUT terraform:  + region = (known after apply) 2025-09-10 00:01:52.286462 | orchestrator | 00:01:52.284 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-10 00:01:52.286468 | orchestrator | 00:01:52.284 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-10 00:01:52.286474 | orchestrator | 00:01:52.284 STDOUT terraform:  + allowed_address_pairs { 2025-09-10 00:01:52.286480 | orchestrator | 00:01:52.284 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-10 00:01:52.286487 | orchestrator | 00:01:52.284 STDOUT terraform:  } 2025-09-10 00:01:52.286493 | orchestrator | 00:01:52.284 STDOUT terraform:  
+ allowed_address_pairs { 2025-09-10 00:01:52.286499 | orchestrator | 00:01:52.284 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-10 00:01:52.286506 | orchestrator | 00:01:52.284 STDOUT terraform:  } 2025-09-10 00:01:52.286512 | orchestrator | 00:01:52.284 STDOUT terraform:  + allowed_address_pairs { 2025-09-10 00:01:52.286519 | orchestrator | 00:01:52.284 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-10 00:01:52.286530 | orchestrator | 00:01:52.284 STDOUT terraform:  } 2025-09-10 00:01:52.286536 | orchestrator | 00:01:52.284 STDOUT terraform:  + allowed_address_pairs { 2025-09-10 00:01:52.286543 | orchestrator | 00:01:52.284 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-10 00:01:52.286549 | orchestrator | 00:01:52.284 STDOUT terraform:  } 2025-09-10 00:01:52.286555 | orchestrator | 00:01:52.284 STDOUT terraform:  + binding (known after apply) 2025-09-10 00:01:52.286561 | orchestrator | 00:01:52.284 STDOUT terraform:  + fixed_ip { 2025-09-10 00:01:52.286567 | orchestrator | 00:01:52.284 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-09-10 00:01:52.286573 | orchestrator | 00:01:52.284 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-10 00:01:52.286579 | orchestrator | 00:01:52.284 STDOUT terraform:  } 2025-09-10 00:01:52.286586 | orchestrator | 00:01:52.284 STDOUT terraform:  } 2025-09-10 00:01:52.286593 | orchestrator | 00:01:52.284 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-09-10 00:01:52.286603 | orchestrator | 00:01:52.285 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-10 00:01:52.286610 | orchestrator | 00:01:52.285 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-10 00:01:52.286617 | orchestrator | 00:01:52.285 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-10 00:01:52.286623 | orchestrator | 00:01:52.285 STDOUT terraform:  + all_security_group_ids = (known after 
apply) 2025-09-10 00:01:52.286629 | orchestrator | 00:01:52.285 STDOUT terraform:  + all_tags = (known after apply) 2025-09-10 00:01:52.286635 | orchestrator | 00:01:52.285 STDOUT terraform:  + device_id = (known after apply) 2025-09-10 00:01:52.286641 | orchestrator | 00:01:52.285 STDOUT terraform:  + device_owner = (known after apply) 2025-09-10 00:01:52.286651 | orchestrator | 00:01:52.285 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-10 00:01:52.286658 | orchestrator | 00:01:52.285 STDOUT terraform:  + dns_name = (known after apply) 2025-09-10 00:01:52.286681 | orchestrator | 00:01:52.285 STDOUT terraform:  + id = (known after apply) 2025-09-10 00:01:52.286688 | orchestrator | 00:01:52.285 STDOUT terraform:  + mac_address = (known after apply) 2025-09-10 00:01:52.286695 | orchestrator | 00:01:52.285 STDOUT terraform:  + network_id = (known after apply) 2025-09-10 00:01:52.286701 | orchestrator | 00:01:52.285 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-10 00:01:52.286707 | orchestrator | 00:01:52.285 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-10 00:01:52.286713 | orchestrator | 00:01:52.285 STDOUT terraform:  + region = (known after apply) 2025-09-10 00:01:52.286719 | orchestrator | 00:01:52.285 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-10 00:01:52.286725 | orchestrator | 00:01:52.285 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-10 00:01:52.286732 | orchestrator | 00:01:52.285 STDOUT terraform:  + allowed_address_pairs { 2025-09-10 00:01:52.286738 | orchestrator | 00:01:52.285 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-10 00:01:52.286749 | orchestrator | 00:01:52.285 STDOUT terraform:  } 2025-09-10 00:01:52.286756 | orchestrator | 00:01:52.285 STDOUT terraform:  + allowed_address_pairs { 2025-09-10 00:01:52.286763 | orchestrator | 00:01:52.285 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-10 00:01:52.286770 | 
orchestrator | 00:01:52.285 STDOUT terraform:  } 2025-09-10 00:01:52.286776 | orchestrator | 00:01:52.285 STDOUT terraform:  + allowed_address_pairs { 2025-09-10 00:01:52.286783 | orchestrator | 00:01:52.285 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-10 00:01:52.286790 | orchestrator | 00:01:52.285 STDOUT terraform:  } 2025-09-10 00:01:52.286796 | orchestrator | 00:01:52.285 STDOUT terraform:  + allowed_address_pairs { 2025-09-10 00:01:52.286803 | orchestrator | 00:01:52.285 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-10 00:01:52.286810 | orchestrator | 00:01:52.285 STDOUT terraform:  } 2025-09-10 00:01:52.286817 | orchestrator | 00:01:52.285 STDOUT terraform:  + binding (known after apply) 2025-09-10 00:01:52.286823 | orchestrator | 00:01:52.285 STDOUT terraform:  + fixed_ip { 2025-09-10 00:01:52.286830 | orchestrator | 00:01:52.285 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-09-10 00:01:52.286836 | orchestrator | 00:01:52.285 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-10 00:01:52.286844 | orchestrator | 00:01:52.285 STDOUT terraform:  } 2025-09-10 00:01:52.286850 | orchestrator | 00:01:52.285 STDOUT terraform:  } 2025-09-10 00:01:52.286857 | orchestrator | 00:01:52.285 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-09-10 00:01:52.286864 | orchestrator | 00:01:52.285 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-10 00:01:52.286870 | orchestrator | 00:01:52.285 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-10 00:01:52.286880 | orchestrator | 00:01:52.285 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-10 00:01:52.290074 | orchestrator | 00:01:52.286 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-10 00:01:52.290097 | orchestrator | 00:01:52.286 STDOUT terraform:  + all_tags = (known after apply) 2025-09-10 00:01:52.290101 | orchestrator | 
00:01:52.286 STDOUT terraform:  + device_id = (known after apply) 2025-09-10 00:01:52.290106 | orchestrator | 00:01:52.287 STDOUT terraform:  + device_owner = (known after apply) 2025-09-10 00:01:52.290110 | orchestrator | 00:01:52.287 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-10 00:01:52.290114 | orchestrator | 00:01:52.287 STDOUT terraform:  + dns_name = (known after apply) 2025-09-10 00:01:52.290118 | orchestrator | 00:01:52.287 STDOUT terraform:  + id = (known after apply) 2025-09-10 00:01:52.290122 | orchestrator | 00:01:52.287 STDOUT terraform:  + mac_address = (known after apply) 2025-09-10 00:01:52.290126 | orchestrator | 00:01:52.287 STDOUT terraform:  + network_id = (known after apply) 2025-09-10 00:01:52.290135 | orchestrator | 00:01:52.287 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-10 00:01:52.290146 | orchestrator | 00:01:52.287 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-10 00:01:52.290154 | orchestrator | 00:01:52.287 STDOUT terraform:  + region = (known after apply) 2025-09-10 00:01:52.290158 | orchestrator | 00:01:52.287 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-10 00:01:52.290161 | orchestrator | 00:01:52.287 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-10 00:01:52.290166 | orchestrator | 00:01:52.287 STDOUT terraform:  + allowed_address_pairs { 2025-09-10 00:01:52.290170 | orchestrator | 00:01:52.287 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-10 00:01:52.290174 | orchestrator | 00:01:52.287 STDOUT terraform:  } 2025-09-10 00:01:52.290178 | orchestrator | 00:01:52.287 STDOUT terraform:  + allowed_address_pairs { 2025-09-10 00:01:52.290182 | orchestrator | 00:01:52.287 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-10 00:01:52.290186 | orchestrator | 00:01:52.287 STDOUT terraform:  } 2025-09-10 00:01:52.290189 | orchestrator | 00:01:52.287 STDOUT terraform:  + allowed_address_pairs { 2025-09-10 
00:01:52.290193 | orchestrator | 00:01:52.287 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-10 00:01:52.290197 | orchestrator | 00:01:52.287 STDOUT terraform:  } 2025-09-10 00:01:52.290201 | orchestrator | 00:01:52.287 STDOUT terraform:  + allowed_address_pairs { 2025-09-10 00:01:52.290205 | orchestrator | 00:01:52.287 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-10 00:01:52.290209 | orchestrator | 00:01:52.287 STDOUT terraform:  } 2025-09-10 00:01:52.290213 | orchestrator | 00:01:52.287 STDOUT terraform:  + binding (known after apply) 2025-09-10 00:01:52.290216 | orchestrator | 00:01:52.287 STDOUT terraform:  + fixed_ip { 2025-09-10 00:01:52.290220 | orchestrator | 00:01:52.287 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-09-10 00:01:52.290224 | orchestrator | 00:01:52.287 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-10 00:01:52.290228 | orchestrator | 00:01:52.287 STDOUT terraform:  } 2025-09-10 00:01:52.290232 | orchestrator | 00:01:52.287 STDOUT terraform:  } 2025-09-10 00:01:52.290236 | orchestrator | 00:01:52.287 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-09-10 00:01:52.290240 | orchestrator | 00:01:52.287 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-10 00:01:52.290244 | orchestrator | 00:01:52.287 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-10 00:01:52.290248 | orchestrator | 00:01:52.287 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-10 00:01:52.290252 | orchestrator | 00:01:52.287 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-10 00:01:52.290256 | orchestrator | 00:01:52.287 STDOUT terraform:  + all_tags = (known after apply) 2025-09-10 00:01:52.290270 | orchestrator | 00:01:52.287 STDOUT terraform:  + device_id = (known after apply) 2025-09-10 00:01:52.290274 | orchestrator | 00:01:52.287 STDOUT terraform:  + device_owner = (known after 
apply) 2025-09-10 00:01:52.290278 | orchestrator | 00:01:52.287 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-10 00:01:52.290285 | orchestrator | 00:01:52.287 STDOUT terraform:  + dns_name = (known after apply) 2025-09-10 00:01:52.290289 | orchestrator | 00:01:52.288 STDOUT terraform:  + id = (known after apply) 2025-09-10 00:01:52.290292 | orchestrator | 00:01:52.288 STDOUT terraform:  + mac_address = (known after apply) 2025-09-10 00:01:52.290296 | orchestrator | 00:01:52.288 STDOUT terraform:  + network_id = (known after apply) 2025-09-10 00:01:52.290300 | orchestrator | 00:01:52.288 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-10 00:01:52.290304 | orchestrator | 00:01:52.288 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-10 00:01:52.290308 | orchestrator | 00:01:52.288 STDOUT terraform:  + region = (known after apply) 2025-09-10 00:01:52.290312 | orchestrator | 00:01:52.288 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-10 00:01:52.290316 | orchestrator | 00:01:52.288 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-10 00:01:52.290320 | orchestrator | 00:01:52.288 STDOUT terraform:  + allowed_address_pairs { 2025-09-10 00:01:52.290324 | orchestrator | 00:01:52.288 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-10 00:01:52.290328 | orchestrator | 00:01:52.288 STDOUT terraform:  } 2025-09-10 00:01:52.290331 | orchestrator | 00:01:52.288 STDOUT terraform:  + allowed_address_pairs { 2025-09-10 00:01:52.290335 | orchestrator | 00:01:52.288 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-10 00:01:52.290339 | orchestrator | 00:01:52.288 STDOUT terraform:  } 2025-09-10 00:01:52.290343 | orchestrator | 00:01:52.288 STDOUT terraform:  + allowed_address_pairs { 2025-09-10 00:01:52.290346 | orchestrator | 00:01:52.288 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-10 00:01:52.290350 | orchestrator | 00:01:52.288 STDOUT terraform:  } 
2025-09-10 00:01:52.290354 | orchestrator | 00:01:52.288 STDOUT terraform:  + allowed_address_pairs { 2025-09-10 00:01:52.290358 | orchestrator | 00:01:52.288 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-10 00:01:52.290362 | orchestrator | 00:01:52.288 STDOUT terraform:  } 2025-09-10 00:01:52.290365 | orchestrator | 00:01:52.288 STDOUT terraform:  + binding (known after apply) 2025-09-10 00:01:52.290369 | orchestrator | 00:01:52.288 STDOUT terraform:  + fixed_ip { 2025-09-10 00:01:52.290373 | orchestrator | 00:01:52.288 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-09-10 00:01:52.290377 | orchestrator | 00:01:52.288 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-10 00:01:52.290381 | orchestrator | 00:01:52.288 STDOUT terraform:  } 2025-09-10 00:01:52.290385 | orchestrator | 00:01:52.288 STDOUT terraform:  } 2025-09-10 00:01:52.290389 | orchestrator | 00:01:52.288 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-09-10 00:01:52.290392 | orchestrator | 00:01:52.288 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-09-10 00:01:52.290396 | orchestrator | 00:01:52.288 STDOUT terraform:  + force_destroy = false 2025-09-10 00:01:52.290400 | orchestrator | 00:01:52.288 STDOUT terraform:  + id = (known after apply) 2025-09-10 00:01:52.290407 | orchestrator | 00:01:52.288 STDOUT terraform:  + port_id = (known after apply) 2025-09-10 00:01:52.290411 | orchestrator | 00:01:52.288 STDOUT terraform:  + region = (known after apply) 2025-09-10 00:01:52.290415 | orchestrator | 00:01:52.288 STDOUT terraform:  + router_id = (known after apply) 2025-09-10 00:01:52.290419 | orchestrator | 00:01:52.288 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-10 00:01:52.290422 | orchestrator | 00:01:52.288 STDOUT terraform:  } 2025-09-10 00:01:52.290429 | orchestrator | 00:01:52.288 STDOUT terraform:  # openstack_networking_router_v2.router will be 
created 2025-09-10 00:01:52.290433 | orchestrator | 00:01:52.288 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-09-10 00:01:52.290437 | orchestrator | 00:01:52.288 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-10 00:01:52.290441 | orchestrator | 00:01:52.288 STDOUT terraform:  + all_tags = (known after apply) 2025-09-10 00:01:52.290445 | orchestrator | 00:01:52.288 STDOUT terraform:  + availability_zone_hints = [ 2025-09-10 00:01:52.290450 | orchestrator | 00:01:52.288 STDOUT terraform:  + "nova", 2025-09-10 00:01:52.290454 | orchestrator | 00:01:52.288 STDOUT terraform:  ] 2025-09-10 00:01:52.290458 | orchestrator | 00:01:52.288 STDOUT terraform:  + distributed = (known after apply) 2025-09-10 00:01:52.290462 | orchestrator | 00:01:52.289 STDOUT terraform:  + enable_snat = (known after apply) 2025-09-10 00:01:52.290465 | orchestrator | 00:01:52.289 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-09-10 00:01:52.290474 | orchestrator | 00:01:52.289 STDOUT terraform:  + external_qos_policy_id = (known after apply) 2025-09-10 00:01:52.290478 | orchestrator | 00:01:52.289 STDOUT terraform:  + id = (known after apply) 2025-09-10 00:01:52.290482 | orchestrator | 00:01:52.289 STDOUT terraform:  + name = "testbed" 2025-09-10 00:01:52.290486 | orchestrator | 00:01:52.289 STDOUT terraform:  + region = (known after apply) 2025-09-10 00:01:52.290489 | orchestrator | 00:01:52.289 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-10 00:01:52.290493 | orchestrator | 00:01:52.289 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-09-10 00:01:52.290497 | orchestrator | 00:01:52.289 STDOUT terraform:  } 2025-09-10 00:01:52.290501 | orchestrator | 00:01:52.289 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-09-10 00:01:52.290505 | orchestrator | 00:01:52.289 STDOUT terraform:  + resource 
"openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-09-10 00:01:52.290509 | orchestrator | 00:01:52.289 STDOUT terraform:  + description = "ssh" 2025-09-10 00:01:52.290513 | orchestrator | 00:01:52.289 STDOUT terraform:  + direction = "ingress" 2025-09-10 00:01:52.290517 | orchestrator | 00:01:52.289 STDOUT terraform:  + ethertype = "IPv4" 2025-09-10 00:01:52.290521 | orchestrator | 00:01:52.289 STDOUT terraform:  + id = (known after apply) 2025-09-10 00:01:52.290524 | orchestrator | 00:01:52.289 STDOUT terraform:  + port_range_max = 22 2025-09-10 00:01:52.290528 | orchestrator | 00:01:52.289 STDOUT terraform:  + port_range_min = 22 2025-09-10 00:01:52.290535 | orchestrator | 00:01:52.289 STDOUT terraform:  + protocol = "tcp" 2025-09-10 00:01:52.290539 | orchestrator | 00:01:52.289 STDOUT terraform:  + region = (known after apply) 2025-09-10 00:01:52.290543 | orchestrator | 00:01:52.289 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-10 00:01:52.290546 | orchestrator | 00:01:52.289 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-10 00:01:52.290550 | orchestrator | 00:01:52.289 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-10 00:01:52.290554 | orchestrator | 00:01:52.289 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-10 00:01:52.290558 | orchestrator | 00:01:52.289 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-10 00:01:52.290562 | orchestrator | 00:01:52.289 STDOUT terraform:  } 2025-09-10 00:01:52.290565 | orchestrator | 00:01:52.289 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-09-10 00:01:52.290569 | orchestrator | 00:01:52.289 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-09-10 00:01:52.290573 | orchestrator | 00:01:52.289 STDOUT terraform:  + description = "wireguard" 2025-09-10 00:01:52.290577 | orchestrator 
| 00:01:52.289 STDOUT terraform:  + direction = "ingress" 2025-09-10 00:01:52.290585 | orchestrator | 00:01:52.289 STDOUT terraform:  + ethertype = "IPv4" 2025-09-10 00:01:52.290589 | orchestrator | 00:01:52.289 STDOUT terraform:  + id = (known after apply) 2025-09-10 00:01:52.290593 | orchestrator | 00:01:52.289 STDOUT terraform:  + port_range_max = 51820 2025-09-10 00:01:52.291694 | orchestrator | 00:01:52.289 STDOUT terraform:  + port_range_min = 51820 2025-09-10 00:01:52.291703 | orchestrator | 00:01:52.290 STDOUT terraform:  + protocol = "udp" 2025-09-10 00:01:52.291707 | orchestrator | 00:01:52.290 STDOUT terraform:  + region = (known after apply) 2025-09-10 00:01:52.291711 | orchestrator | 00:01:52.290 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-10 00:01:52.291714 | orchestrator | 00:01:52.290 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-10 00:01:52.291718 | orchestrator | 00:01:52.290 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-10 00:01:52.291725 | orchestrator | 00:01:52.290 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-10 00:01:52.291729 | orchestrator | 00:01:52.290 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-10 00:01:52.291733 | orchestrator | 00:01:52.290 STDOUT terraform:  } 2025-09-10 00:01:52.291736 | orchestrator | 00:01:52.290 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-09-10 00:01:52.291740 | orchestrator | 00:01:52.290 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-09-10 00:01:52.291744 | orchestrator | 00:01:52.290 STDOUT terraform:  + direction = "ingress" 2025-09-10 00:01:52.291748 | orchestrator | 00:01:52.290 STDOUT terraform:  + ethertype = "IPv4" 2025-09-10 00:01:52.291755 | orchestrator | 00:01:52.291 STDOUT terraform:  + id = (known after apply) 2025-09-10 00:01:52.291759 | orchestrator | 
00:01:52.291 STDOUT terraform:  + protocol = "tcp" 2025-09-10 00:01:52.291763 | orchestrator | 00:01:52.291 STDOUT terraform:  + region = (known after apply) 2025-09-10 00:01:52.291767 | orchestrator | 00:01:52.291 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-10 00:01:52.291771 | orchestrator | 00:01:52.291 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-10 00:01:52.291774 | orchestrator | 00:01:52.291 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-09-10 00:01:52.291778 | orchestrator | 00:01:52.291 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-10 00:01:52.291782 | orchestrator | 00:01:52.291 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-10 00:01:52.291786 | orchestrator | 00:01:52.291 STDOUT terraform:  } 2025-09-10 00:01:52.291790 | orchestrator | 00:01:52.291 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-09-10 00:01:52.291793 | orchestrator | 00:01:52.291 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-09-10 00:01:52.291797 | orchestrator | 00:01:52.291 STDOUT terraform:  + direction = "ingress" 2025-09-10 00:01:52.291801 | orchestrator | 00:01:52.291 STDOUT terraform:  + ethertype = "IPv4" 2025-09-10 00:01:52.291805 | orchestrator | 00:01:52.291 STDOUT terraform:  + id = (known after apply) 2025-09-10 00:01:52.291809 | orchestrator | 00:01:52.291 STDOUT terraform:  + protocol = "udp" 2025-09-10 00:01:52.291812 | orchestrator | 00:01:52.291 STDOUT terraform:  + region = (known after apply) 2025-09-10 00:01:52.291816 | orchestrator | 00:01:52.291 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-10 00:01:52.291820 | orchestrator | 00:01:52.291 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-10 00:01:52.291824 | orchestrator | 00:01:52.291 STDOUT terraform:  + remote_ip_prefix = 
"192.168.16.0/20" 2025-09-10 00:01:52.291827 | orchestrator | 00:01:52.291 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-10 00:01:52.291831 | orchestrator | 00:01:52.291 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-10 00:01:52.291835 | orchestrator | 00:01:52.291 STDOUT terraform:  } 2025-09-10 00:01:52.291841 | orchestrator | 00:01:52.291 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-09-10 00:01:52.291845 | orchestrator | 00:01:52.291 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-09-10 00:01:52.291849 | orchestrator | 00:01:52.291 STDOUT terraform:  + direction = "ingress" 2025-09-10 00:01:52.291853 | orchestrator | 00:01:52.291 STDOUT terraform:  + ethertype = "IPv4" 2025-09-10 00:01:52.291856 | orchestrator | 00:01:52.291 STDOUT terraform:  + id = (known after apply) 2025-09-10 00:01:52.291862 | orchestrator | 00:01:52.291 STDOUT terraform:  + protocol = "icmp" 2025-09-10 00:01:52.294073 | orchestrator | 00:01:52.291 STDOUT terraform:  + region = (known after apply) 2025-09-10 00:01:52.294094 | orchestrator | 00:01:52.291 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-10 00:01:52.294098 | orchestrator | 00:01:52.291 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-10 00:01:52.294102 | orchestrator | 00:01:52.291 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-10 00:01:52.294106 | orchestrator | 00:01:52.292 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-10 00:01:52.294110 | orchestrator | 00:01:52.292 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-10 00:01:52.294114 | orchestrator | 00:01:52.292 STDOUT terraform:  } 2025-09-10 00:01:52.294118 | orchestrator | 00:01:52.292 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-09-10 00:01:52.294122 | 
orchestrator | 00:01:52.292 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
2025-09-10 00:01:52.294125 | orchestrator | 00:01:52.292 STDOUT terraform:  + direction = "ingress"
2025-09-10 00:01:52.294129 | orchestrator | 00:01:52.292 STDOUT terraform:  + ethertype = "IPv4"
2025-09-10 00:01:52.294133 | orchestrator | 00:01:52.292 STDOUT terraform:  + id = (known after apply)
2025-09-10 00:01:52.294137 | orchestrator | 00:01:52.292 STDOUT terraform:  + protocol = "tcp"
2025-09-10 00:01:52.294141 | orchestrator | 00:01:52.292 STDOUT terraform:  + region = (known after apply)
2025-09-10 00:01:52.294144 | orchestrator | 00:01:52.292 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-09-10 00:01:52.294148 | orchestrator | 00:01:52.292 STDOUT terraform:  + remote_group_id = (known after apply)
2025-09-10 00:01:52.294152 | orchestrator | 00:01:52.292 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-09-10 00:01:52.294156 | orchestrator | 00:01:52.292 STDOUT terraform:  + security_group_id = (known after apply)
2025-09-10 00:01:52.294159 | orchestrator | 00:01:52.292 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-10 00:01:52.294163 | orchestrator | 00:01:52.292 STDOUT terraform:  }
2025-09-10 00:01:52.294167 | orchestrator | 00:01:52.292 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
2025-09-10 00:01:52.294171 | orchestrator | 00:01:52.292 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
2025-09-10 00:01:52.294175 | orchestrator | 00:01:52.292 STDOUT terraform:  + direction = "ingress"
2025-09-10 00:01:52.294178 | orchestrator | 00:01:52.292 STDOUT terraform:  + ethertype = "IPv4"
2025-09-10 00:01:52.294182 | orchestrator | 00:01:52.292 STDOUT terraform:  + id = (known after apply)
2025-09-10 00:01:52.294186 | orchestrator | 00:01:52.292 STDOUT terraform:  + protocol = "udp"
2025-09-10 00:01:52.294190 | orchestrator | 00:01:52.292 STDOUT terraform:  + region = (known after apply)
2025-09-10 00:01:52.294194 | orchestrator | 00:01:52.292 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-09-10 00:01:52.294197 | orchestrator | 00:01:52.292 STDOUT terraform:  + remote_group_id = (known after apply)
2025-09-10 00:01:52.294206 | orchestrator | 00:01:52.292 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-09-10 00:01:52.294210 | orchestrator | 00:01:52.292 STDOUT terraform:  + security_group_id = (known after apply)
2025-09-10 00:01:52.294214 | orchestrator | 00:01:52.292 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-10 00:01:52.294218 | orchestrator | 00:01:52.292 STDOUT terraform:  }
2025-09-10 00:01:52.294222 | orchestrator | 00:01:52.292 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2025-09-10 00:01:52.294226 | orchestrator | 00:01:52.292 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2025-09-10 00:01:52.294235 | orchestrator | 00:01:52.292 STDOUT terraform:  + direction = "ingress"
2025-09-10 00:01:52.294239 | orchestrator | 00:01:52.292 STDOUT terraform:  + ethertype = "IPv4"
2025-09-10 00:01:52.294243 | orchestrator | 00:01:52.292 STDOUT terraform:  + id = (known after apply)
2025-09-10 00:01:52.294247 | orchestrator | 00:01:52.293 STDOUT terraform:  + protocol = "icmp"
2025-09-10 00:01:52.294250 | orchestrator | 00:01:52.293 STDOUT terraform:  + region = (known after apply)
2025-09-10 00:01:52.294254 | orchestrator | 00:01:52.293 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-09-10 00:01:52.294258 | orchestrator | 00:01:52.293 STDOUT terraform:  + remote_group_id = (known after apply)
2025-09-10 00:01:52.294262 | orchestrator | 00:01:52.293 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-09-10 00:01:52.294266 | orchestrator | 00:01:52.293 STDOUT terraform:  + security_group_id = (known after apply)
2025-09-10 00:01:52.294269 | orchestrator | 00:01:52.293 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-10 00:01:52.294273 | orchestrator | 00:01:52.293 STDOUT terraform:  }
2025-09-10 00:01:52.294277 | orchestrator | 00:01:52.293 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2025-09-10 00:01:52.294281 | orchestrator | 00:01:52.293 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2025-09-10 00:01:52.294285 | orchestrator | 00:01:52.293 STDOUT terraform:  + description = "vrrp"
2025-09-10 00:01:52.294289 | orchestrator | 00:01:52.293 STDOUT terraform:  + direction = "ingress"
2025-09-10 00:01:52.294293 | orchestrator | 00:01:52.293 STDOUT terraform:  + ethertype = "IPv4"
2025-09-10 00:01:52.294323 | orchestrator | 00:01:52.293 STDOUT terraform:  + id = (known after apply)
2025-09-10 00:01:52.294327 | orchestrator | 00:01:52.293 STDOUT terraform:  + protocol = "112"
2025-09-10 00:01:52.294331 | orchestrator | 00:01:52.293 STDOUT terraform:  + region = (known after apply)
2025-09-10 00:01:52.294335 | orchestrator | 00:01:52.293 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-09-10 00:01:52.294339 | orchestrator | 00:01:52.293 STDOUT terraform:  + remote_group_id = (known after apply)
2025-09-10 00:01:52.294342 | orchestrator | 00:01:52.293 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-09-10 00:01:52.294350 | orchestrator | 00:01:52.293 STDOUT terraform:  + security_group_id = (known after apply)
2025-09-10 00:01:52.294354 | orchestrator | 00:01:52.293 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-10 00:01:52.294357 | orchestrator | 00:01:52.293 STDOUT terraform:  }
2025-09-10 00:01:52.294361 | orchestrator | 00:01:52.293 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created
2025-09-10 00:01:52.294365 | orchestrator | 00:01:52.293 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" {
2025-09-10 00:01:52.294369 | orchestrator | 00:01:52.293 STDOUT terraform:  + all_tags = (known after apply)
2025-09-10 00:01:52.294373 | orchestrator | 00:01:52.293 STDOUT terraform:  + description = "management security group"
2025-09-10 00:01:52.294377 | orchestrator | 00:01:52.293 STDOUT terraform:  + id = (known after apply)
2025-09-10 00:01:52.294381 | orchestrator | 00:01:52.293 STDOUT terraform:  + name = "testbed-management"
2025-09-10 00:01:52.294384 | orchestrator | 00:01:52.293 STDOUT terraform:  + region = (known after apply)
2025-09-10 00:01:52.294388 | orchestrator | 00:01:52.293 STDOUT terraform:  + stateful = (known after apply)
2025-09-10 00:01:52.294392 | orchestrator | 00:01:52.293 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-10 00:01:52.294396 | orchestrator | 00:01:52.293 STDOUT terraform:  }
2025-09-10 00:01:52.294400 | orchestrator | 00:01:52.293 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created
2025-09-10 00:01:52.294406 | orchestrator | 00:01:52.293 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" {
2025-09-10 00:01:52.294413 | orchestrator | 00:01:52.294 STDOUT terraform:  + all_tags = (known after apply)
2025-09-10 00:01:52.294417 | orchestrator | 00:01:52.294 STDOUT terraform:  + description = "node security group"
2025-09-10 00:01:52.294420 | orchestrator | 00:01:52.294 STDOUT terraform:  + id = (known after apply)
2025-09-10 00:01:52.294424 | orchestrator | 00:01:52.294 STDOUT terraform:  + name = "testbed-node"
2025-09-10 00:01:52.294428 | orchestrator | 00:01:52.294 STDOUT terraform:  + region = (known after apply)
2025-09-10 00:01:52.294432 | orchestrator | 00:01:52.294 STDOUT terraform:  + stateful = (known after apply)
2025-09-10 00:01:52.294436 | orchestrator | 00:01:52.294 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-10 00:01:52.294439 | orchestrator | 00:01:52.294 STDOUT terraform:  }
2025-09-10 00:01:52.294443 | orchestrator | 00:01:52.294 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created
2025-09-10 00:01:52.294447 | orchestrator | 00:01:52.294 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" {
2025-09-10 00:01:52.294451 | orchestrator | 00:01:52.294 STDOUT terraform:  + all_tags = (known after apply)
2025-09-10 00:01:52.294455 | orchestrator | 00:01:52.294 STDOUT terraform:  + cidr = "192.168.16.0/20"
2025-09-10 00:01:52.294459 | orchestrator | 00:01:52.294 STDOUT terraform:  + dns_nameservers = [
2025-09-10 00:01:52.294463 | orchestrator | 00:01:52.294 STDOUT terraform:  + "8.8.8.8",
2025-09-10 00:01:52.294468 | orchestrator | 00:01:52.294 STDOUT terraform:  + "9.9.9.9",
2025-09-10 00:01:52.294476 | orchestrator | 00:01:52.294 STDOUT terraform:  ]
2025-09-10 00:01:52.294479 | orchestrator | 00:01:52.294 STDOUT terraform:  + enable_dhcp = true
2025-09-10 00:01:52.294483 | orchestrator | 00:01:52.294 STDOUT terraform:  + gateway_ip = (known after apply)
2025-09-10 00:01:52.294488 | orchestrator | 00:01:52.294 STDOUT terraform:  + id = (known after apply)
2025-09-10 00:01:52.294512 | orchestrator | 00:01:52.294 STDOUT terraform:  + ip_version = 4
2025-09-10 00:01:52.294541 | orchestrator | 00:01:52.294 STDOUT terraform:  + ipv6_address_mode = (known after apply)
2025-09-10 00:01:52.294572 | orchestrator | 00:01:52.294 STDOUT terraform:  + ipv6_ra_mode = (known after apply)
2025-09-10 00:01:52.294609 | orchestrator | 00:01:52.294 STDOUT terraform:  + name = "subnet-testbed-management"
2025-09-10 00:01:52.294638 | orchestrator | 00:01:52.294 STDOUT terraform:  + network_id = (known after apply)
2025-09-10 00:01:52.294655 | orchestrator | 00:01:52.294 STDOUT terraform:  + no_gateway = false
2025-09-10 00:01:52.294684 | orchestrator | 00:01:52.294 STDOUT terraform:  + region = (known after apply)
2025-09-10 00:01:52.294713 | orchestrator | 00:01:52.294 STDOUT terraform:  + service_types = (known after apply)
2025-09-10 00:01:52.294742 | orchestrator | 00:01:52.294 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-10 00:01:52.294759 | orchestrator | 00:01:52.294 STDOUT terraform:  + allocation_pool {
2025-09-10 00:01:52.294780 | orchestrator | 00:01:52.294 STDOUT terraform:  + end = "192.168.31.250"
2025-09-10 00:01:52.294803 | orchestrator | 00:01:52.294 STDOUT terraform:  + start = "192.168.31.200"
2025-09-10 00:01:52.294809 | orchestrator | 00:01:52.294 STDOUT terraform:  }
2025-09-10 00:01:52.294814 | orchestrator | 00:01:52.294 STDOUT terraform:  }
2025-09-10 00:01:52.294844 | orchestrator | 00:01:52.294 STDOUT terraform:  # terraform_data.image will be created
2025-09-10 00:01:52.294870 | orchestrator | 00:01:52.294 STDOUT terraform:  + resource "terraform_data" "image" {
2025-09-10 00:01:52.294893 | orchestrator | 00:01:52.294 STDOUT terraform:  + id = (known after apply)
2025-09-10 00:01:52.294910 | orchestrator | 00:01:52.294 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-09-10 00:01:52.294934 | orchestrator | 00:01:52.294 STDOUT terraform:  + output = (known after apply)
2025-09-10 00:01:52.294940 | orchestrator | 00:01:52.294 STDOUT terraform:  }
2025-09-10 00:01:52.294970 | orchestrator | 00:01:52.294 STDOUT terraform:  # terraform_data.image_node will be created
2025-09-10 00:01:52.295016 | orchestrator | 00:01:52.294 STDOUT terraform:  + resource "terraform_data" "image_node" {
2025-09-10 00:01:52.295035 | orchestrator | 00:01:52.295 STDOUT terraform:  + id = (known after apply)
2025-09-10 00:01:52.295051 | orchestrator | 00:01:52.295 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-09-10 00:01:52.295196 | orchestrator | 00:01:52.295 STDOUT terraform:  + output = (known after apply)
2025-09-10 00:01:52.295272 | orchestrator | 00:01:52.295 STDOUT terraform:  }
2025-09-10 00:01:52.295287 | orchestrator | 00:01:52.295 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy.
2025-09-10 00:01:52.295298 | orchestrator | 00:01:52.295 STDOUT terraform: Changes to Outputs:
2025-09-10 00:01:52.295327 | orchestrator | 00:01:52.295 STDOUT terraform:  + manager_address = (sensitive value)
2025-09-10 00:01:52.295348 | orchestrator | 00:01:52.295 STDOUT terraform:  + private_key = (sensitive value)
2025-09-10 00:01:52.423981 | orchestrator | 00:01:52.423 STDOUT terraform: terraform_data.image_node: Creating...
2025-09-10 00:01:52.473378 | orchestrator | 00:01:52.473 STDOUT terraform: terraform_data.image: Creating...
2025-09-10 00:01:52.473445 | orchestrator | 00:01:52.473 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=2c4d33f1-7a2e-f9a8-466b-c3d3520fcb5c]
2025-09-10 00:01:52.473478 | orchestrator | 00:01:52.473 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=4e3cbde4-3a5a-c2ec-23bf-927497aa5f38]
2025-09-10 00:01:52.488585 | orchestrator | 00:01:52.488 STDOUT terraform: data.openstack_images_image_v2.image: Reading...
2025-09-10 00:01:52.489159 | orchestrator | 00:01:52.489 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading...
2025-09-10 00:01:52.500305 | orchestrator | 00:01:52.500 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
2025-09-10 00:01:52.500518 | orchestrator | 00:01:52.500 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2025-09-10 00:01:52.510738 | orchestrator | 00:01:52.510 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2025-09-10 00:01:52.511604 | orchestrator | 00:01:52.511 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2025-09-10 00:01:52.512441 | orchestrator | 00:01:52.512 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-09-10 00:01:52.513282 | orchestrator | 00:01:52.513 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2025-09-10 00:01:52.513636 | orchestrator | 00:01:52.513 STDOUT terraform: openstack_compute_keypair_v2.key: Creating...
2025-09-10 00:01:52.514090 | orchestrator | 00:01:52.513 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2025-09-10 00:01:52.962353 | orchestrator | 00:01:52.962 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-09-10 00:01:52.966113 | orchestrator | 00:01:52.965 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2025-09-10 00:01:52.978275 | orchestrator | 00:01:52.978 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-09-10 00:01:52.982465 | orchestrator | 00:01:52.982 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2025-09-10 00:01:53.016578 | orchestrator | 00:01:53.016 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2025-09-10 00:01:53.020926 | orchestrator | 00:01:53.020 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2025-09-10 00:01:54.036282 | orchestrator | 00:01:54.035 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 1s [id=83813b98-0def-42a7-998f-600e1e5513f8]
2025-09-10 00:01:54.044719 | orchestrator | 00:01:54.044 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-09-10 00:01:56.149523 | orchestrator | 00:01:56.149 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=b761a3bc-d220-47d2-9376-37cff0079757]
2025-09-10 00:01:56.164367 | orchestrator | 00:01:56.164 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=2ea24e78-5d32-46b6-abea-13531085710c]
2025-09-10 00:01:56.180105 | orchestrator | 00:01:56.177 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-09-10 00:01:56.180179 | orchestrator | 00:01:56.177 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-09-10 00:01:56.199031 | orchestrator | 00:01:56.198 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=d735b4b4-15bb-46a1-a658-203c9ec5fb9c]
2025-09-10 00:01:56.199103 | orchestrator | 00:01:56.198 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=9ffc2c63-19a1-4e5a-ab50-a28911e045bb]
2025-09-10 00:01:56.202176 | orchestrator | 00:01:56.202 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-09-10 00:01:56.204079 | orchestrator | 00:01:56.204 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-09-10 00:01:56.206233 | orchestrator | 00:01:56.206 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=b86cce0a-40ed-4a07-99f1-19becb84c901]
2025-09-10 00:01:56.212291 | orchestrator | 00:01:56.212 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=4a1ca5dc-6f40-4107-905c-b866d721086e]
2025-09-10 00:01:56.218275 | orchestrator | 00:01:56.218 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-09-10 00:01:56.218317 | orchestrator | 00:01:56.218 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-09-10 00:01:56.258202 | orchestrator | 00:01:56.254 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=4b58dc27-9074-4d6f-a7a6-fd15259a7e00]
2025-09-10 00:01:56.268109 | orchestrator | 00:01:56.267 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=15be4489-a4ef-46b7-8669-fa4e45790ef6]
2025-09-10 00:01:56.276407 | orchestrator | 00:01:56.276 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=2400494c-3cf2-4780-9c6b-527d492c6bfd]
2025-09-10 00:01:56.286189 | orchestrator | 00:01:56.286 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-09-10 00:01:56.286226 | orchestrator | 00:01:56.286 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-09-10 00:01:56.286232 | orchestrator | 00:01:56.286 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-09-10 00:01:56.289099 | orchestrator | 00:01:56.288 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=c8dacc049b779f3fec7a92de82a4a26fff59fecc]
2025-09-10 00:01:56.289848 | orchestrator | 00:01:56.289 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=aa2b518eed3de1c9055070b7d63be0300bde6c3b]
2025-09-10 00:01:57.210118 | orchestrator | 00:01:57.209 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=381a1b27-5cdc-489b-bcb6-5c0865045bd2]
2025-09-10 00:01:57.220841 | orchestrator | 00:01:57.220 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-09-10 00:01:57.422238 | orchestrator | 00:01:57.421 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=76e3aa19-82bc-491d-ab0d-cf92a10f04de]
2025-09-10 00:01:59.584019 | orchestrator | 00:01:59.583 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=1d1bd04c-2fb3-49ff-963e-06aac3d99067]
2025-09-10 00:01:59.606155 | orchestrator | 00:01:59.605 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=885f5351-8a1d-42ab-b6e2-24d16c7f1b28]
2025-09-10 00:01:59.641575 | orchestrator | 00:01:59.641 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=42f8eac2-f52b-494e-b151-a2efb4a40d85]
2025-09-10 00:01:59.645230 | orchestrator | 00:01:59.644 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=ecdc9ada-f1b8-4b11-8385-281bf4902674]
2025-09-10 00:01:59.672161 | orchestrator | 00:01:59.671 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=7eb4c3bb-4118-4e06-ba0b-43b94bfc7779]
2025-09-10 00:01:59.672308 | orchestrator | 00:01:59.672 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=ea0201e7-6d46-42f0-8fca-6e90e492352f]
2025-09-10 00:02:01.372337 | orchestrator | 00:02:01.371 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 4s [id=40f294b9-34ff-46e7-8a92-40a6a2c2df2c]
2025-09-10 00:02:01.381721 | orchestrator | 00:02:01.381 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-09-10 00:02:01.382587 | orchestrator | 00:02:01.382 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-09-10 00:02:01.382896 | orchestrator | 00:02:01.382 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-09-10 00:02:01.602780 | orchestrator | 00:02:01.602 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=f7fff4ec-6869-4820-aeed-0b32fadcb67b]
2025-09-10 00:02:01.620136 | orchestrator | 00:02:01.619 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-09-10 00:02:01.620243 | orchestrator | 00:02:01.620 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-09-10 00:02:01.620771 | orchestrator | 00:02:01.620 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-09-10 00:02:01.620881 | orchestrator | 00:02:01.620 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-09-10 00:02:01.622192 | orchestrator | 00:02:01.622 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-09-10 00:02:01.622806 | orchestrator | 00:02:01.622 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-09-10 00:02:01.627753 | orchestrator | 00:02:01.627 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=a6ea9432-9011-4a23-9dba-9bac9d975d73]
2025-09-10 00:02:01.632586 | orchestrator | 00:02:01.632 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-09-10 00:02:01.633047 | orchestrator | 00:02:01.632 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-09-10 00:02:01.634090 | orchestrator | 00:02:01.633 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-09-10 00:02:01.830728 | orchestrator | 00:02:01.830 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=8783edc1-ebef-4e44-9357-34a8d725b10e]
2025-09-10 00:02:01.837463 | orchestrator | 00:02:01.837 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-09-10 00:02:02.043427 | orchestrator | 00:02:02.043 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=3f1b1ec4-1d87-49de-a388-f03da5c60fa8]
2025-09-10 00:02:02.057736 | orchestrator | 00:02:02.057 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-09-10 00:02:02.210126 | orchestrator | 00:02:02.209 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=cdbc4061-cb6b-481e-928a-8996940c7df4]
2025-09-10 00:02:02.228189 | orchestrator | 00:02:02.227 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-09-10 00:02:02.237506 | orchestrator | 00:02:02.237 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 0s [id=3902540c-fcc3-40e3-b8b3-1541ae0b8d15]
2025-09-10 00:02:02.240335 | orchestrator | 00:02:02.240 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=d51dd196-9510-42bf-87f6-de5d8d5d9e11]
2025-09-10 00:02:02.250799 | orchestrator | 00:02:02.250 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-09-10 00:02:02.255307 | orchestrator | 00:02:02.255 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-09-10 00:02:02.387238 | orchestrator | 00:02:02.386 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=40ff9e67-7e9a-43ae-bda9-17888cca8c26]
2025-09-10 00:02:02.403350 | orchestrator | 00:02:02.403 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-09-10 00:02:02.437865 | orchestrator | 00:02:02.437 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=ac66e86b-74b8-4228-9a42-cd78fa3ae24b]
2025-09-10 00:02:02.454151 | orchestrator | 00:02:02.452 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-09-10 00:02:02.872141 | orchestrator | 00:02:02.871 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=8ab8f59b-192f-4d0c-a978-60bda9083ce0]
2025-09-10 00:02:03.161052 | orchestrator | 00:02:03.160 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=c0953458-b6a7-4e67-ac17-2eb388eae315]
2025-09-10 00:02:03.221164 | orchestrator | 00:02:03.220 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=d0ddf00c-b569-4bab-a422-842d7ffe3ff5]
2025-09-10 00:02:03.270216 | orchestrator | 00:02:03.269 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=2fbcb111-65a9-4282-bdd6-de554099db77]
2025-09-10 00:02:03.334344 | orchestrator | 00:02:03.333 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=3b4eb3b4-8de9-4ea8-8c9c-36525fa02cb9]
2025-09-10 00:02:03.451883 | orchestrator | 00:02:03.451 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=cf4a8fed-31db-4e1c-8a02-c67330750f68]
2025-09-10 00:02:03.717876 | orchestrator | 00:02:03.717 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 2s [id=e5adc2b3-177f-4112-b06c-c28375e28b84]
2025-09-10 00:02:03.985775 | orchestrator | 00:02:03.985 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=11b852f2-1ba2-44c8-980c-9b22bc1eb804]
2025-09-10 00:02:03.992067 | orchestrator | 00:02:03.991 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-09-10 00:02:04.681681 | orchestrator | 00:02:04.681 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 3s [id=663845bb-a1ed-4fc3-afb1-7ed7d6c8216c]
2025-09-10 00:02:04.724206 | orchestrator | 00:02:04.723 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 3s [id=8f563b57-7b67-4326-9286-17a444c34370]
2025-09-10 00:02:04.759745 | orchestrator | 00:02:04.759 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-09-10 00:02:04.769688 | orchestrator | 00:02:04.769 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-09-10 00:02:04.773427 | orchestrator | 00:02:04.773 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-09-10 00:02:04.775329 | orchestrator | 00:02:04.775 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-09-10 00:02:04.775500 | orchestrator | 00:02:04.775 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-09-10 00:02:04.777900 | orchestrator | 00:02:04.777 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-09-10 00:02:06.426343 | orchestrator | 00:02:06.425 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=4d3f560a-fa0e-4d9e-9197-c1dae99c08fc]
2025-09-10 00:02:06.440216 | orchestrator | 00:02:06.439 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-09-10 00:02:06.446414 | orchestrator | 00:02:06.446 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-09-10 00:02:06.446675 | orchestrator | 00:02:06.446 STDOUT terraform: local_file.inventory: Creating...
2025-09-10 00:02:06.455790 | orchestrator | 00:02:06.455 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=0b014978e08ca171d2ec3101dd225152a0565282]
2025-09-10 00:02:06.456263 | orchestrator | 00:02:06.456 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=c6cff4a53b4398550a1d58bdafac0a11f8770783]
2025-09-10 00:02:08.075068 | orchestrator | 00:02:08.074 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 2s [id=4d3f560a-fa0e-4d9e-9197-c1dae99c08fc]
2025-09-10 00:02:14.761861 | orchestrator | 00:02:14.761 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-09-10 00:02:14.778081 | orchestrator | 00:02:14.777 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-09-10 00:02:14.779178 | orchestrator | 00:02:14.778 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-09-10 00:02:14.780338 | orchestrator | 00:02:14.780 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-09-10 00:02:14.780415 | orchestrator | 00:02:14.780 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-09-10 00:02:14.780716 | orchestrator | 00:02:14.780 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-09-10 00:02:24.765656 | orchestrator | 00:02:24.765 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-09-10 00:02:24.778777 | orchestrator | 00:02:24.778 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-09-10 00:02:24.779950 | orchestrator | 00:02:24.779 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-09-10 00:02:24.781367 | orchestrator | 00:02:24.780 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-09-10 00:02:24.781428 | orchestrator | 00:02:24.781 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-09-10 00:02:24.781644 | orchestrator | 00:02:24.781 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-09-10 00:02:25.336767 | orchestrator | 00:02:25.335 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 20s [id=154cee7d-9d64-4d73-b20b-da5306cb3591]
2025-09-10 00:02:25.344078 | orchestrator | 00:02:25.343 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 20s [id=bf867a1f-be8a-47b8-93b9-e567007b14d6]
2025-09-10 00:02:25.544920 | orchestrator | 00:02:25.544 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 21s [id=0d4bb427-ab00-499d-81c1-0a38c576bf7a]
2025-09-10 00:02:34.781695 | orchestrator | 00:02:34.781 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2025-09-10 00:02:34.781840 | orchestrator | 00:02:34.781 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2025-09-10 00:02:34.781857 | orchestrator | 00:02:34.781 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2025-09-10 00:02:35.386742 | orchestrator | 00:02:35.386 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 30s [id=56d78b89-50aa-4eec-92c2-2d5eb26444c6]
2025-09-10 00:02:35.425777 | orchestrator | 00:02:35.425 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 30s [id=ad8c401b-2a64-4735-aff3-00a9c416e596]
2025-09-10 00:02:35.498843 | orchestrator | 00:02:35.498 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 30s [id=53227b8f-ea5c-4d52-9a9e-92cb403ca036]
2025-09-10 00:02:35.534401 | orchestrator | 00:02:35.534 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-09-10 00:02:35.534487 | orchestrator | 00:02:35.534 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=4031284981704820662]
2025-09-10 00:02:35.537821 | orchestrator | 00:02:35.537 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-09-10 00:02:35.538739 | orchestrator | 00:02:35.538 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-09-10 00:02:35.539712 | orchestrator | 00:02:35.539 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-09-10 00:02:35.540641 | orchestrator | 00:02:35.540 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-09-10 00:02:35.542148 | orchestrator | 00:02:35.541 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-09-10 00:02:35.544977 | orchestrator | 00:02:35.544 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-09-10 00:02:35.545021 | orchestrator | 00:02:35.544 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-09-10 00:02:35.555362 | orchestrator | 00:02:35.555 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-09-10 00:02:35.561578 | orchestrator | 00:02:35.561 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-09-10 00:02:35.562665 | orchestrator | 00:02:35.562 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
2025-09-10 00:02:38.959563 | orchestrator | 00:02:38.959 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=56d78b89-50aa-4eec-92c2-2d5eb26444c6/d735b4b4-15bb-46a1-a658-203c9ec5fb9c]
2025-09-10 00:02:38.969153 | orchestrator | 00:02:38.968 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=ad8c401b-2a64-4735-aff3-00a9c416e596/b761a3bc-d220-47d2-9376-37cff0079757]
2025-09-10 00:02:38.987367 | orchestrator | 00:02:38.986 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=154cee7d-9d64-4d73-b20b-da5306cb3591/9ffc2c63-19a1-4e5a-ab50-a28911e045bb]
2025-09-10 00:02:45.068304 | orchestrator | 00:02:45.067 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 9s [id=ad8c401b-2a64-4735-aff3-00a9c416e596/2400494c-3cf2-4780-9c6b-527d492c6bfd]
2025-09-10 00:02:45.077570 | orchestrator | 00:02:45.077 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 9s [id=56d78b89-50aa-4eec-92c2-2d5eb26444c6/b86cce0a-40ed-4a07-99f1-19becb84c901]
2025-09-10 00:02:45.108897 | orchestrator | 00:02:45.108 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 9s [id=154cee7d-9d64-4d73-b20b-da5306cb3591/4a1ca5dc-6f40-4107-905c-b866d721086e]
2025-09-10 00:02:45.254179 | orchestrator | 00:02:45.253 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 9s [id=56d78b89-50aa-4eec-92c2-2d5eb26444c6/2ea24e78-5d32-46b6-abea-13531085710c]
2025-09-10 00:02:45.274360 | orchestrator | 00:02:45.273 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 9s [id=ad8c401b-2a64-4735-aff3-00a9c416e596/15be4489-a4ef-46b7-8669-fa4e45790ef6]
2025-09-10 00:02:45.283824 | orchestrator | 00:02:45.283 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 9s [id=154cee7d-9d64-4d73-b20b-da5306cb3591/4b58dc27-9074-4d6f-a7a6-fd15259a7e00]
2025-09-10 00:02:45.565890 | orchestrator | 00:02:45.565 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2025-09-10 00:02:55.566814 | orchestrator | 00:02:55.566 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2025-09-10 00:02:55.943500 | orchestrator | 00:02:55.943 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=31185803-59b7-47fd-a48f-e529533259c1]
2025-09-10 00:02:55.971119 | orchestrator | 00:02:55.970 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2025-09-10 00:02:55.971196 | orchestrator | 00:02:55.971 STDOUT terraform: Outputs:
2025-09-10 00:02:55.971209 | orchestrator | 00:02:55.971 STDOUT terraform: manager_address = 
2025-09-10 00:02:55.971218 | orchestrator | 00:02:55.971 STDOUT terraform: private_key = 
2025-09-10 00:02:56.330117 | orchestrator | ok: Runtime: 0:01:09.342991
2025-09-10 00:02:56.363395 |
2025-09-10 00:02:56.363519 | TASK [Create infrastructure (stable)]
2025-09-10 00:02:56.896794 | orchestrator | skipping: Conditional result was False
2025-09-10 00:02:56.909216 |
2025-09-10 00:02:56.909345 | TASK [Fetch manager address]
2025-09-10 00:02:57.332391 | orchestrator | ok
2025-09-10 00:02:57.342762 |
2025-09-10 00:02:57.342907 | TASK [Set manager_host address]
2025-09-10 00:02:57.423290 | orchestrator | ok
2025-09-10 00:02:57.433868 |
2025-09-10 00:02:57.434013 | LOOP [Update ansible collections]
2025-09-10 00:03:03.677609 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-09-10 00:03:03.677862 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-09-10 00:03:03.677897 | orchestrator | Starting galaxy collection install process
2025-09-10 00:03:03.677921 | orchestrator | Process install dependency map
2025-09-10 00:03:03.677942 | orchestrator | Starting collection install process
2025-09-10 00:03:03.677962 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons'
2025-09-10 00:03:03.677987 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons
2025-09-10 00:03:03.678011 | orchestrator | osism.commons:999.0.0 was installed successfully
2025-09-10 00:03:03.678058 | orchestrator | ok: Item: commons Runtime: 0:00:05.936389
2025-09-10 00:03:07.323708 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-09-10 00:03:07.323874 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-10 00:03:07.323911 | orchestrator | Starting galaxy collection install process 2025-09-10 00:03:07.323935 | orchestrator | Process install dependency map 2025-09-10 00:03:07.323957 | orchestrator | Starting collection install process 2025-09-10 00:03:07.323976 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services' 2025-09-10 00:03:07.323997 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services 2025-09-10 00:03:07.324030 | orchestrator | osism.services:999.0.0 was installed successfully 2025-09-10 00:03:07.324065 | orchestrator | ok: Item: services Runtime: 0:00:03.385615 2025-09-10 00:03:07.341821 | 2025-09-10 00:03:07.341942 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-09-10 00:03:17.888670 | orchestrator | ok 2025-09-10 00:03:17.900279 | 2025-09-10 00:03:17.900398 | TASK [Wait a little longer for the manager so that everything is ready] 2025-09-10 00:04:17.937412 | orchestrator | ok 2025-09-10 00:04:17.948436 | 2025-09-10 00:04:17.948554 | TASK [Fetch manager ssh hostkey] 2025-09-10 00:04:19.519780 | orchestrator | Output suppressed because no_log was given 2025-09-10 00:04:19.535123 | 2025-09-10 00:04:19.535307 | TASK [Get ssh keypair from terraform environment] 2025-09-10 00:04:20.071866 | orchestrator | ok: Runtime: 0:00:00.010210 2025-09-10 00:04:20.090220 | 2025-09-10 00:04:20.090386 | TASK [Point out that the following task takes some time and does not give any output] 2025-09-10 00:04:20.140832 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
2025-09-10 00:04:20.151077 | 2025-09-10 00:04:20.151237 | TASK [Run manager part 0] 2025-09-10 00:04:21.589188 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-10 00:04:21.743068 | orchestrator | 2025-09-10 00:04:21.743129 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-09-10 00:04:21.743141 | orchestrator | 2025-09-10 00:04:21.743161 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-09-10 00:04:23.820723 | orchestrator | ok: [testbed-manager] 2025-09-10 00:04:23.820791 | orchestrator | 2025-09-10 00:04:23.820827 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-09-10 00:04:23.820844 | orchestrator | 2025-09-10 00:04:23.820859 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-10 00:04:25.751532 | orchestrator | ok: [testbed-manager] 2025-09-10 00:04:25.751584 | orchestrator | 2025-09-10 00:04:25.751592 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-09-10 00:04:26.424223 | orchestrator | ok: [testbed-manager] 2025-09-10 00:04:26.424272 | orchestrator | 2025-09-10 00:04:26.424281 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-09-10 00:04:26.486457 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:04:26.486490 | orchestrator | 2025-09-10 00:04:26.486499 | orchestrator | TASK [Update package cache] **************************************************** 2025-09-10 00:04:26.521269 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:04:26.521290 | orchestrator | 2025-09-10 00:04:26.521296 | orchestrator | TASK [Install required packages] *********************************************** 2025-09-10 00:04:26.550781 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:04:26.550798 | 
orchestrator | 2025-09-10 00:04:26.550803 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-09-10 00:04:26.577523 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:04:26.577537 | orchestrator | 2025-09-10 00:04:26.577542 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-09-10 00:04:26.607561 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:04:26.607579 | orchestrator | 2025-09-10 00:04:26.607584 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-09-10 00:04:26.637834 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:04:26.637847 | orchestrator | 2025-09-10 00:04:26.637854 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-09-10 00:04:26.664999 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:04:26.665013 | orchestrator | 2025-09-10 00:04:26.665018 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-09-10 00:04:27.436528 | orchestrator | changed: [testbed-manager] 2025-09-10 00:04:27.436581 | orchestrator | 2025-09-10 00:04:27.436590 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-09-10 00:06:56.509604 | orchestrator | changed: [testbed-manager] 2025-09-10 00:06:56.509735 | orchestrator | 2025-09-10 00:06:56.509756 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-09-10 00:08:24.356468 | orchestrator | changed: [testbed-manager] 2025-09-10 00:08:24.356540 | orchestrator | 2025-09-10 00:08:24.356550 | orchestrator | TASK [Install required packages] *********************************************** 2025-09-10 00:08:44.546407 | orchestrator | changed: [testbed-manager] 2025-09-10 00:08:44.546490 | orchestrator | 2025-09-10 00:08:44.546508 | orchestrator | TASK [Remove 
some python packages] ********************************************* 2025-09-10 00:08:53.609934 | orchestrator | changed: [testbed-manager] 2025-09-10 00:08:53.610092 | orchestrator | 2025-09-10 00:08:53.610114 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-09-10 00:08:53.656835 | orchestrator | ok: [testbed-manager] 2025-09-10 00:08:53.657035 | orchestrator | 2025-09-10 00:08:53.657058 | orchestrator | TASK [Get current user] ******************************************************** 2025-09-10 00:08:54.463857 | orchestrator | ok: [testbed-manager] 2025-09-10 00:08:54.463900 | orchestrator | 2025-09-10 00:08:54.463911 | orchestrator | TASK [Create venv directory] *************************************************** 2025-09-10 00:08:55.207428 | orchestrator | changed: [testbed-manager] 2025-09-10 00:08:55.207515 | orchestrator | 2025-09-10 00:08:55.207530 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-09-10 00:09:01.546577 | orchestrator | changed: [testbed-manager] 2025-09-10 00:09:01.546803 | orchestrator | 2025-09-10 00:09:01.546851 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-09-10 00:09:07.557488 | orchestrator | changed: [testbed-manager] 2025-09-10 00:09:07.557535 | orchestrator | 2025-09-10 00:09:07.557546 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-09-10 00:09:10.269997 | orchestrator | changed: [testbed-manager] 2025-09-10 00:09:10.270119 | orchestrator | 2025-09-10 00:09:10.270139 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-09-10 00:09:11.979298 | orchestrator | changed: [testbed-manager] 2025-09-10 00:09:11.979377 | orchestrator | 2025-09-10 00:09:11.979392 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-09-10 
00:09:13.101225 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-09-10 00:09:13.101314 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-09-10 00:09:13.101329 | orchestrator | 2025-09-10 00:09:13.101341 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-09-10 00:09:13.139466 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-09-10 00:09:13.139545 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-09-10 00:09:13.139560 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-09-10 00:09:13.139572 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-09-10 00:09:17.209189 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-09-10 00:09:17.209266 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-09-10 00:09:17.209279 | orchestrator | 2025-09-10 00:09:17.209289 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-09-10 00:09:17.777289 | orchestrator | changed: [testbed-manager] 2025-09-10 00:09:17.777373 | orchestrator | 2025-09-10 00:09:17.777389 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-09-10 00:11:38.713116 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-09-10 00:11:38.713209 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-09-10 00:11:38.713227 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-09-10 00:11:38.713240 | orchestrator | 2025-09-10 00:11:38.713253 | orchestrator | TASK [Install local collections] *********************************************** 2025-09-10 00:11:41.059722 | orchestrator | changed: [testbed-manager] => 
(item=ansible-collection-commons) 2025-09-10 00:11:41.059805 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-09-10 00:11:41.059822 | orchestrator | 2025-09-10 00:11:41.059834 | orchestrator | PLAY [Create operator user] **************************************************** 2025-09-10 00:11:41.059846 | orchestrator | 2025-09-10 00:11:41.059858 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-10 00:11:42.467387 | orchestrator | ok: [testbed-manager] 2025-09-10 00:11:42.467468 | orchestrator | 2025-09-10 00:11:42.467487 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-09-10 00:11:42.514278 | orchestrator | ok: [testbed-manager] 2025-09-10 00:11:42.514334 | orchestrator | 2025-09-10 00:11:42.514345 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-09-10 00:11:42.583178 | orchestrator | ok: [testbed-manager] 2025-09-10 00:11:42.583222 | orchestrator | 2025-09-10 00:11:42.583228 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-09-10 00:11:43.341169 | orchestrator | changed: [testbed-manager] 2025-09-10 00:11:43.341217 | orchestrator | 2025-09-10 00:11:43.341226 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-09-10 00:11:44.136897 | orchestrator | changed: [testbed-manager] 2025-09-10 00:11:44.137623 | orchestrator | 2025-09-10 00:11:44.137662 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-09-10 00:11:45.550466 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-09-10 00:11:45.550522 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-09-10 00:11:45.550530 | orchestrator | 2025-09-10 00:11:45.550546 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] 
************************* 2025-09-10 00:11:46.953653 | orchestrator | changed: [testbed-manager] 2025-09-10 00:11:46.953749 | orchestrator | 2025-09-10 00:11:46.953768 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-09-10 00:11:48.697499 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-09-10 00:11:48.697544 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-09-10 00:11:48.697551 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-09-10 00:11:48.697557 | orchestrator | 2025-09-10 00:11:48.697564 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-09-10 00:11:48.748667 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:11:48.748756 | orchestrator | 2025-09-10 00:11:48.748773 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-09-10 00:11:49.332542 | orchestrator | changed: [testbed-manager] 2025-09-10 00:11:49.333182 | orchestrator | 2025-09-10 00:11:49.333207 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-09-10 00:11:49.400462 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:11:49.400543 | orchestrator | 2025-09-10 00:11:49.400560 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-09-10 00:11:50.364910 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-10 00:11:50.364987 | orchestrator | changed: [testbed-manager] 2025-09-10 00:11:50.365002 | orchestrator | 2025-09-10 00:11:50.365012 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-09-10 00:11:50.403517 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:11:50.403584 | orchestrator | 2025-09-10 00:11:50.403599 | orchestrator | TASK 
[osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-09-10 00:11:50.437620 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:11:50.437685 | orchestrator | 2025-09-10 00:11:50.437698 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-09-10 00:11:50.466346 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:11:50.466402 | orchestrator | 2025-09-10 00:11:50.466415 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-09-10 00:11:50.510506 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:11:50.510561 | orchestrator | 2025-09-10 00:11:50.510579 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-09-10 00:11:51.268013 | orchestrator | ok: [testbed-manager] 2025-09-10 00:11:51.268059 | orchestrator | 2025-09-10 00:11:51.268065 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-09-10 00:11:51.268070 | orchestrator | 2025-09-10 00:11:51.268074 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-10 00:11:52.746895 | orchestrator | ok: [testbed-manager] 2025-09-10 00:11:52.746960 | orchestrator | 2025-09-10 00:11:52.746976 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-09-10 00:11:53.704262 | orchestrator | changed: [testbed-manager] 2025-09-10 00:11:53.704340 | orchestrator | 2025-09-10 00:11:53.704356 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-10 00:11:53.704369 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-09-10 00:11:53.704380 | orchestrator | 2025-09-10 00:11:53.965625 | orchestrator | ok: Runtime: 0:07:33.337962 2025-09-10 00:11:53.981268 | 2025-09-10 00:11:53.981399 | TASK [Point 
out that the log in on the manager is now possible] 2025-09-10 00:11:54.016262 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2025-09-10 00:11:54.024772 | 2025-09-10 00:11:54.024878 | TASK [Point out that the following task takes some time and does not give any output] 2025-09-10 00:11:54.063514 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-09-10 00:11:54.073131 | 2025-09-10 00:11:54.073249 | TASK [Run manager part 1 + 2] 2025-09-10 00:11:54.890644 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-10 00:11:54.942896 | orchestrator | 2025-09-10 00:11:54.942940 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-09-10 00:11:54.942947 | orchestrator | 2025-09-10 00:11:54.942960 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-10 00:11:57.875568 | orchestrator | ok: [testbed-manager] 2025-09-10 00:11:57.875618 | orchestrator | 2025-09-10 00:11:57.875638 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-09-10 00:11:57.910175 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:11:57.910215 | orchestrator | 2025-09-10 00:11:57.910224 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-09-10 00:11:57.948968 | orchestrator | ok: [testbed-manager] 2025-09-10 00:11:57.949018 | orchestrator | 2025-09-10 00:11:57.949028 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-10 00:11:57.986665 | orchestrator | ok: [testbed-manager] 2025-09-10 00:11:57.986706 | orchestrator | 2025-09-10 00:11:57.986714 | orchestrator | TASK [osism.commons.repository : Set repository_default fact 
to default value] *** 2025-09-10 00:11:58.046729 | orchestrator | ok: [testbed-manager] 2025-09-10 00:11:58.046771 | orchestrator | 2025-09-10 00:11:58.046781 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-10 00:11:58.098943 | orchestrator | ok: [testbed-manager] 2025-09-10 00:11:58.098982 | orchestrator | 2025-09-10 00:11:58.098991 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-10 00:11:58.134808 | orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-09-10 00:11:58.134835 | orchestrator | 2025-09-10 00:11:58.134841 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-10 00:11:58.798642 | orchestrator | ok: [testbed-manager] 2025-09-10 00:11:58.798687 | orchestrator | 2025-09-10 00:11:58.798697 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-10 00:11:58.844806 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:11:58.844844 | orchestrator | 2025-09-10 00:11:58.844851 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-10 00:12:00.126829 | orchestrator | changed: [testbed-manager] 2025-09-10 00:12:00.126873 | orchestrator | 2025-09-10 00:12:00.126882 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-10 00:12:00.704635 | orchestrator | ok: [testbed-manager] 2025-09-10 00:12:00.704681 | orchestrator | 2025-09-10 00:12:00.704691 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-10 00:12:01.875386 | orchestrator | changed: [testbed-manager] 2025-09-10 00:12:01.875425 | orchestrator | 2025-09-10 00:12:01.875433 | orchestrator | TASK [osism.commons.repository : Update 
package cache] ************************* 2025-09-10 00:12:18.925474 | orchestrator | changed: [testbed-manager] 2025-09-10 00:12:18.925558 | orchestrator | 2025-09-10 00:12:18.925574 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-09-10 00:12:19.607067 | orchestrator | ok: [testbed-manager] 2025-09-10 00:12:19.607184 | orchestrator | 2025-09-10 00:12:19.607203 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-09-10 00:12:19.658162 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:12:19.658352 | orchestrator | 2025-09-10 00:12:19.658374 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-09-10 00:12:20.596732 | orchestrator | changed: [testbed-manager] 2025-09-10 00:12:20.596782 | orchestrator | 2025-09-10 00:12:20.596791 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-09-10 00:12:21.569888 | orchestrator | changed: [testbed-manager] 2025-09-10 00:12:21.570547 | orchestrator | 2025-09-10 00:12:21.570571 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-09-10 00:12:22.127120 | orchestrator | changed: [testbed-manager] 2025-09-10 00:12:22.127195 | orchestrator | 2025-09-10 00:12:22.127204 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-09-10 00:12:22.168334 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-09-10 00:12:22.168432 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-09-10 00:12:22.168448 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-09-10 00:12:22.168460 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-09-10 00:12:26.272065 | orchestrator | changed: [testbed-manager] 2025-09-10 00:12:26.272210 | orchestrator | 2025-09-10 00:12:26.272231 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-09-10 00:12:35.490076 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-09-10 00:12:35.490164 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-09-10 00:12:35.490178 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-09-10 00:12:35.490187 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-09-10 00:12:35.490200 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-09-10 00:12:35.490208 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-09-10 00:12:35.490217 | orchestrator | 2025-09-10 00:12:35.490226 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-09-10 00:12:36.511889 | orchestrator | changed: [testbed-manager] 2025-09-10 00:12:36.511975 | orchestrator | 2025-09-10 00:12:36.511990 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-09-10 00:12:36.554573 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:12:36.554647 | orchestrator | 2025-09-10 00:12:36.554660 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-09-10 00:12:40.362904 | orchestrator | changed: [testbed-manager] 2025-09-10 00:12:40.362978 | orchestrator | 2025-09-10 00:12:40.362993 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-09-10 00:12:40.402888 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:12:40.402950 | orchestrator | 2025-09-10 00:12:40.402963 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-09-10 00:14:21.020242 | orchestrator | changed: [testbed-manager] 2025-09-10 
00:14:21.020461 | orchestrator | 2025-09-10 00:14:21.020483 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-09-10 00:14:22.176663 | orchestrator | ok: [testbed-manager] 2025-09-10 00:14:22.176706 | orchestrator | 2025-09-10 00:14:22.176715 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-10 00:14:22.176724 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-09-10 00:14:22.176731 | orchestrator | 2025-09-10 00:14:22.700408 | orchestrator | ok: Runtime: 0:02:27.880062 2025-09-10 00:14:22.718390 | 2025-09-10 00:14:22.718559 | TASK [Reboot manager] 2025-09-10 00:14:24.255120 | orchestrator | ok: Runtime: 0:00:00.989069 2025-09-10 00:14:24.271855 | 2025-09-10 00:14:24.272007 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-09-10 00:14:40.682295 | orchestrator | ok 2025-09-10 00:14:40.691711 | 2025-09-10 00:14:40.691824 | TASK [Wait a little longer for the manager so that everything is ready] 2025-09-10 00:15:40.726260 | orchestrator | ok 2025-09-10 00:15:40.735985 | 2025-09-10 00:15:40.736100 | TASK [Deploy manager + bootstrap nodes] 2025-09-10 00:15:43.394417 | orchestrator | 2025-09-10 00:15:43.394707 | orchestrator | # DEPLOY MANAGER 2025-09-10 00:15:43.394738 | orchestrator | 2025-09-10 00:15:43.394793 | orchestrator | + set -e 2025-09-10 00:15:43.394809 | orchestrator | + echo 2025-09-10 00:15:43.394824 | orchestrator | + echo '# DEPLOY MANAGER' 2025-09-10 00:15:43.394842 | orchestrator | + echo 2025-09-10 00:15:43.394892 | orchestrator | + cat /opt/manager-vars.sh 2025-09-10 00:15:43.398298 | orchestrator | export NUMBER_OF_NODES=6 2025-09-10 00:15:43.398382 | orchestrator | 2025-09-10 00:15:43.398397 | orchestrator | export CEPH_VERSION=reef 2025-09-10 00:15:43.398410 | orchestrator | export CONFIGURATION_VERSION=main 2025-09-10 00:15:43.398421 | orchestrator 
| export MANAGER_VERSION=latest 2025-09-10 00:15:43.398448 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-09-10 00:15:43.398458 | orchestrator | 2025-09-10 00:15:43.398475 | orchestrator | export ARA=false 2025-09-10 00:15:43.398486 | orchestrator | export DEPLOY_MODE=manager 2025-09-10 00:15:43.398502 | orchestrator | export TEMPEST=true 2025-09-10 00:15:43.398512 | orchestrator | export IS_ZUUL=true 2025-09-10 00:15:43.398522 | orchestrator | 2025-09-10 00:15:43.398538 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.31 2025-09-10 00:15:43.398548 | orchestrator | export EXTERNAL_API=false 2025-09-10 00:15:43.398558 | orchestrator | 2025-09-10 00:15:43.398568 | orchestrator | export IMAGE_USER=ubuntu 2025-09-10 00:15:43.398601 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-09-10 00:15:43.398611 | orchestrator | 2025-09-10 00:15:43.398621 | orchestrator | export CEPH_STACK=ceph-ansible 2025-09-10 00:15:43.398708 | orchestrator | 2025-09-10 00:15:43.398740 | orchestrator | + echo 2025-09-10 00:15:43.398753 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-10 00:15:43.399672 | orchestrator | ++ export INTERACTIVE=false 2025-09-10 00:15:43.399692 | orchestrator | ++ INTERACTIVE=false 2025-09-10 00:15:43.399742 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-10 00:15:43.399756 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-10 00:15:43.400243 | orchestrator | + source /opt/manager-vars.sh 2025-09-10 00:15:43.400316 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-10 00:15:43.400330 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-10 00:15:43.400340 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-10 00:15:43.400350 | orchestrator | ++ CEPH_VERSION=reef 2025-09-10 00:15:43.400489 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-10 00:15:43.400506 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-10 00:15:43.400516 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-10 00:15:43.400526 | 
orchestrator | ++ MANAGER_VERSION=latest 2025-09-10 00:15:43.400536 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-10 00:15:43.400557 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-10 00:15:43.400613 | orchestrator | ++ export ARA=false 2025-09-10 00:15:43.400636 | orchestrator | ++ ARA=false 2025-09-10 00:15:43.400646 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-10 00:15:43.400656 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-10 00:15:43.400718 | orchestrator | ++ export TEMPEST=true 2025-09-10 00:15:43.400742 | orchestrator | ++ TEMPEST=true 2025-09-10 00:15:43.400789 | orchestrator | ++ export IS_ZUUL=true 2025-09-10 00:15:43.400802 | orchestrator | ++ IS_ZUUL=true 2025-09-10 00:15:43.401034 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.31 2025-09-10 00:15:43.401055 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.31 2025-09-10 00:15:43.401065 | orchestrator | ++ export EXTERNAL_API=false 2025-09-10 00:15:43.401075 | orchestrator | ++ EXTERNAL_API=false 2025-09-10 00:15:43.401085 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-10 00:15:43.401095 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-10 00:15:43.401105 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-10 00:15:43.401165 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-10 00:15:43.401267 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-10 00:15:43.401281 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-10 00:15:43.401546 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-09-10 00:15:43.465416 | orchestrator | + docker version 2025-09-10 00:15:43.738863 | orchestrator | Client: Docker Engine - Community 2025-09-10 00:15:43.738949 | orchestrator | Version: 27.5.1 2025-09-10 00:15:43.738963 | orchestrator | API version: 1.47 2025-09-10 00:15:43.738976 | orchestrator | Go version: go1.22.11 2025-09-10 00:15:43.738988 | orchestrator | Git commit: 9f9e405 2025-09-10 00:15:43.738999 | 
orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-09-10 00:15:43.739011 | orchestrator | OS/Arch: linux/amd64 2025-09-10 00:15:43.739021 | orchestrator | Context: default 2025-09-10 00:15:43.739032 | orchestrator | 2025-09-10 00:15:43.739043 | orchestrator | Server: Docker Engine - Community 2025-09-10 00:15:43.739055 | orchestrator | Engine: 2025-09-10 00:15:43.739066 | orchestrator | Version: 27.5.1 2025-09-10 00:15:43.739077 | orchestrator | API version: 1.47 (minimum version 1.24) 2025-09-10 00:15:43.739116 | orchestrator | Go version: go1.22.11 2025-09-10 00:15:43.739266 | orchestrator | Git commit: 4c9b3b0 2025-09-10 00:15:43.739285 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-09-10 00:15:43.739296 | orchestrator | OS/Arch: linux/amd64 2025-09-10 00:15:43.739307 | orchestrator | Experimental: false 2025-09-10 00:15:43.739318 | orchestrator | containerd: 2025-09-10 00:15:43.739329 | orchestrator | Version: 1.7.27 2025-09-10 00:15:43.739341 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-09-10 00:15:43.739353 | orchestrator | runc: 2025-09-10 00:15:43.739364 | orchestrator | Version: 1.2.5 2025-09-10 00:15:43.739375 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-09-10 00:15:43.739386 | orchestrator | docker-init: 2025-09-10 00:15:43.739432 | orchestrator | Version: 0.19.0 2025-09-10 00:15:43.739447 | orchestrator | GitCommit: de40ad0 2025-09-10 00:15:43.742728 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-09-10 00:15:43.752744 | orchestrator | + set -e 2025-09-10 00:15:43.752791 | orchestrator | + source /opt/manager-vars.sh 2025-09-10 00:15:43.752818 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-10 00:15:43.752976 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-10 00:15:43.752993 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-10 00:15:43.753005 | orchestrator | ++ CEPH_VERSION=reef 2025-09-10 00:15:43.753016 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-10 
00:15:43.753028 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-10 00:15:43.753039 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-10 00:15:43.753049 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-10 00:15:43.753060 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-10 00:15:43.753082 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-10 00:15:43.753094 | orchestrator | ++ export ARA=false 2025-09-10 00:15:43.753105 | orchestrator | ++ ARA=false 2025-09-10 00:15:43.753116 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-10 00:15:43.753126 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-10 00:15:43.753137 | orchestrator | ++ export TEMPEST=true 2025-09-10 00:15:43.753148 | orchestrator | ++ TEMPEST=true 2025-09-10 00:15:43.753159 | orchestrator | ++ export IS_ZUUL=true 2025-09-10 00:15:43.753227 | orchestrator | ++ IS_ZUUL=true 2025-09-10 00:15:43.753240 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.31 2025-09-10 00:15:43.753251 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.31 2025-09-10 00:15:43.753262 | orchestrator | ++ export EXTERNAL_API=false 2025-09-10 00:15:43.753273 | orchestrator | ++ EXTERNAL_API=false 2025-09-10 00:15:43.753283 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-10 00:15:43.753294 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-10 00:15:43.753305 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-10 00:15:43.753315 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-10 00:15:43.753326 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-10 00:15:43.753337 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-10 00:15:43.753361 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-10 00:15:43.753383 | orchestrator | ++ export INTERACTIVE=false 2025-09-10 00:15:43.753395 | orchestrator | ++ INTERACTIVE=false 2025-09-10 00:15:43.753406 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-10 00:15:43.753421 | orchestrator | ++ OSISM_APPLY_RETRY=1 
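The long runs of `++ export NAME=value` / `++ NAME=value` pairs above are what sourcing a plain variables file looks like under `set -x`: each `export` is traced once for the builtin and once for the assignment. A minimal reconstruction of what `/opt/manager-vars.sh` plausibly contains, using the names and values from this run (the file layout itself is an assumption; only the variables are taken from the trace):

```shell
# Hypothetical sketch of /opt/manager-vars.sh as implied by the
# `++ export ...` trace lines above; values mirror this particular run.
export NUMBER_OF_NODES=6
export CEPH_VERSION=reef
export CONFIGURATION_VERSION=main
export MANAGER_VERSION=latest
export OPENSTACK_VERSION=2024.2
export ARA=false
export DEPLOY_MODE=manager
export TEMPEST=true
export CEPH_STACK=ceph-ansible
```

Sourcing such a file (`source /opt/manager-vars.sh`) makes every variable available to the deploy scripts and to any child process they spawn, which is why the same values reappear verbatim when `000-manager.sh` re-sources it later in the log.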
2025-09-10 00:15:43.753516 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-10 00:15:43.753532 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-10 00:15:43.753543 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2025-09-10 00:15:43.760309 | orchestrator | + set -e 2025-09-10 00:15:43.760336 | orchestrator | + VERSION=reef 2025-09-10 00:15:43.761678 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2025-09-10 00:15:43.767528 | orchestrator | + [[ -n ceph_version: reef ]] 2025-09-10 00:15:43.767554 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2025-09-10 00:15:43.773828 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2025-09-10 00:15:43.780707 | orchestrator | + set -e 2025-09-10 00:15:43.781349 | orchestrator | + VERSION=2024.2 2025-09-10 00:15:43.781931 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2025-09-10 00:15:43.785788 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2025-09-10 00:15:43.785814 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2025-09-10 00:15:43.791325 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-09-10 00:15:43.792460 | orchestrator | ++ semver latest 7.0.0 2025-09-10 00:15:43.859846 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-10 00:15:43.859898 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-10 00:15:43.859912 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-09-10 00:15:43.859923 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-09-10 00:15:43.962480 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-09-10 00:15:43.968573 | orchestrator | + source /opt/venv/bin/activate 2025-09-10 00:15:43.969691 | orchestrator | ++ deactivate nondestructive 
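The `set-ceph-version.sh` and `set-openstack-version.sh` traces above both follow the same pattern: grep for the key at the start of a line, and only if it is already present, rewrite its value in place with `sed -i`. A self-contained sketch of that pattern, operating on a temp file instead of the real `/opt/configuration/environments/manager/configuration.yml` (paths and sample contents here are illustrative, not taken from the repo):

```shell
# Sketch of the set-*-version.sh technique traced above: update a
# `key: value` line in a YAML-ish config file only when the key exists.
set -e
VERSION=reef

# Stand-in for configuration.yml so the sketch runs anywhere.
cfg=$(mktemp)
printf 'openstack_version: 2024.1\nceph_version: quincy\n' > "$cfg"

# Guard: do nothing if the key is absent (grep output is empty).
if [ -n "$(grep '^ceph_version:' "$cfg")" ]; then
    sed -i "s/ceph_version: .*/ceph_version: ${VERSION}/g" "$cfg"
fi

grep '^ceph_version:' "$cfg"
```

The grep-before-sed guard matters because `sed -i 's/.../.../'` exits 0 even when nothing matched; without the check, a missing key would be silently "updated" into nothing rather than flagged.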
2025-09-10 00:15:43.969715 | orchestrator | ++ '[' -n '' ']' 2025-09-10 00:15:43.969726 | orchestrator | ++ '[' -n '' ']' 2025-09-10 00:15:43.969746 | orchestrator | ++ hash -r 2025-09-10 00:15:43.969844 | orchestrator | ++ '[' -n '' ']' 2025-09-10 00:15:43.969871 | orchestrator | ++ unset VIRTUAL_ENV 2025-09-10 00:15:43.969882 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-09-10 00:15:43.969894 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2025-09-10 00:15:43.969996 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-09-10 00:15:43.970082 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-09-10 00:15:43.970106 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-09-10 00:15:43.970118 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-09-10 00:15:43.970235 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-09-10 00:15:43.970252 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-09-10 00:15:43.970263 | orchestrator | ++ export PATH 2025-09-10 00:15:43.970274 | orchestrator | ++ '[' -n '' ']' 2025-09-10 00:15:43.970422 | orchestrator | ++ '[' -z '' ']' 2025-09-10 00:15:43.970438 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-09-10 00:15:43.970462 | orchestrator | ++ PS1='(venv) ' 2025-09-10 00:15:43.970473 | orchestrator | ++ export PS1 2025-09-10 00:15:43.970484 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-09-10 00:15:43.970496 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-09-10 00:15:43.970507 | orchestrator | ++ hash -r 2025-09-10 00:15:43.970594 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-09-10 00:15:45.256841 | orchestrator | 2025-09-10 00:15:45.256942 | orchestrator | PLAY [Copy custom facts] 
******************************************************* 2025-09-10 00:15:45.256959 | orchestrator | 2025-09-10 00:15:45.256972 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-09-10 00:15:45.818744 | orchestrator | ok: [testbed-manager] 2025-09-10 00:15:45.818836 | orchestrator | 2025-09-10 00:15:45.818852 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-09-10 00:15:46.804960 | orchestrator | changed: [testbed-manager] 2025-09-10 00:15:46.805062 | orchestrator | 2025-09-10 00:15:46.805079 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-09-10 00:15:46.805091 | orchestrator | 2025-09-10 00:15:46.805103 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-10 00:15:49.150643 | orchestrator | ok: [testbed-manager] 2025-09-10 00:15:49.150742 | orchestrator | 2025-09-10 00:15:49.150756 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-09-10 00:15:49.196243 | orchestrator | ok: [testbed-manager] 2025-09-10 00:15:49.196319 | orchestrator | 2025-09-10 00:15:49.196338 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-09-10 00:15:49.653002 | orchestrator | changed: [testbed-manager] 2025-09-10 00:15:49.653095 | orchestrator | 2025-09-10 00:15:49.653111 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2025-09-10 00:15:49.689583 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:15:49.689646 | orchestrator | 2025-09-10 00:15:49.689659 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-09-10 00:15:50.043227 | orchestrator | changed: [testbed-manager] 2025-09-10 00:15:50.043322 | orchestrator | 2025-09-10 00:15:50.043339 | orchestrator | TASK [Use insecure 
glance configuration] *************************************** 2025-09-10 00:15:50.099692 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:15:50.099723 | orchestrator | 2025-09-10 00:15:50.099736 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-09-10 00:15:50.439921 | orchestrator | ok: [testbed-manager] 2025-09-10 00:15:50.440016 | orchestrator | 2025-09-10 00:15:50.440031 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-09-10 00:15:50.579525 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:15:50.579573 | orchestrator | 2025-09-10 00:15:50.579586 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2025-09-10 00:15:50.579597 | orchestrator | 2025-09-10 00:15:50.579611 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-10 00:15:52.332805 | orchestrator | ok: [testbed-manager] 2025-09-10 00:15:52.332902 | orchestrator | 2025-09-10 00:15:52.332918 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-09-10 00:15:52.451925 | orchestrator | included: osism.services.traefik for testbed-manager 2025-09-10 00:15:52.452007 | orchestrator | 2025-09-10 00:15:52.452018 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-09-10 00:15:52.511141 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-09-10 00:15:52.511248 | orchestrator | 2025-09-10 00:15:52.511261 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-09-10 00:15:53.640790 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-09-10 00:15:53.640893 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 
2025-09-10 00:15:53.640908 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-09-10 00:15:53.640921 | orchestrator | 2025-09-10 00:15:53.640932 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-09-10 00:15:55.417082 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-09-10 00:15:55.417236 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-09-10 00:15:55.417257 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-09-10 00:15:55.417270 | orchestrator | 2025-09-10 00:15:55.417282 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-09-10 00:15:56.061516 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-10 00:15:56.061620 | orchestrator | changed: [testbed-manager] 2025-09-10 00:15:56.061638 | orchestrator | 2025-09-10 00:15:56.061651 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-09-10 00:15:56.710734 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-10 00:15:56.710835 | orchestrator | changed: [testbed-manager] 2025-09-10 00:15:56.710851 | orchestrator | 2025-09-10 00:15:56.710864 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-09-10 00:15:56.765059 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:15:56.765121 | orchestrator | 2025-09-10 00:15:56.765139 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-09-10 00:15:57.119590 | orchestrator | ok: [testbed-manager] 2025-09-10 00:15:57.119680 | orchestrator | 2025-09-10 00:15:57.119695 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-09-10 00:15:57.191899 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-09-10 00:15:57.191977 | orchestrator | 2025-09-10 00:15:57.191990 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-09-10 00:15:58.266802 | orchestrator | changed: [testbed-manager] 2025-09-10 00:15:58.266897 | orchestrator | 2025-09-10 00:15:58.266913 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-09-10 00:15:59.086559 | orchestrator | changed: [testbed-manager] 2025-09-10 00:15:59.086655 | orchestrator | 2025-09-10 00:15:59.086669 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-09-10 00:16:10.719273 | orchestrator | changed: [testbed-manager] 2025-09-10 00:16:10.719384 | orchestrator | 2025-09-10 00:16:10.719399 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-09-10 00:16:10.770567 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:16:10.770623 | orchestrator | 2025-09-10 00:16:10.770638 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-09-10 00:16:10.770650 | orchestrator | 2025-09-10 00:16:10.770660 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-10 00:16:12.471974 | orchestrator | ok: [testbed-manager] 2025-09-10 00:16:12.472063 | orchestrator | 2025-09-10 00:16:12.472103 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-09-10 00:16:12.583761 | orchestrator | included: osism.services.manager for testbed-manager 2025-09-10 00:16:12.583833 | orchestrator | 2025-09-10 00:16:12.583847 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-09-10 00:16:12.651707 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-09-10 00:16:12.651775 | orchestrator | 2025-09-10 00:16:12.651789 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-09-10 00:16:16.367433 | orchestrator | ok: [testbed-manager] 2025-09-10 00:16:16.367524 | orchestrator | 2025-09-10 00:16:16.367540 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-09-10 00:16:16.424850 | orchestrator | ok: [testbed-manager] 2025-09-10 00:16:16.424899 | orchestrator | 2025-09-10 00:16:16.424914 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-09-10 00:16:16.566390 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-09-10 00:16:16.566467 | orchestrator | 2025-09-10 00:16:16.566483 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-09-10 00:16:19.430651 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-09-10 00:16:19.430758 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-09-10 00:16:19.430772 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-09-10 00:16:19.430784 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-09-10 00:16:19.430796 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-09-10 00:16:19.430807 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-09-10 00:16:19.430818 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-09-10 00:16:19.430829 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-09-10 00:16:19.430840 | orchestrator | 2025-09-10 00:16:19.430852 | orchestrator | TASK 
[osism.services.manager : Copy all environment file] ********************** 2025-09-10 00:16:20.082676 | orchestrator | changed: [testbed-manager] 2025-09-10 00:16:20.082767 | orchestrator | 2025-09-10 00:16:20.082783 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-09-10 00:16:20.716120 | orchestrator | changed: [testbed-manager] 2025-09-10 00:16:20.716255 | orchestrator | 2025-09-10 00:16:20.716273 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-09-10 00:16:20.799452 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-09-10 00:16:20.799527 | orchestrator | 2025-09-10 00:16:20.799541 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-09-10 00:16:22.022819 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-09-10 00:16:22.022918 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-09-10 00:16:22.022934 | orchestrator | 2025-09-10 00:16:22.022946 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-09-10 00:16:22.695292 | orchestrator | changed: [testbed-manager] 2025-09-10 00:16:22.695391 | orchestrator | 2025-09-10 00:16:22.695407 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-09-10 00:16:22.752614 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:16:22.752680 | orchestrator | 2025-09-10 00:16:22.752694 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2025-09-10 00:16:22.833629 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2025-09-10 00:16:22.833691 | orchestrator | 2025-09-10 00:16:22.833705 | orchestrator | TASK 
[osism.services.manager : Copy frontend environment file] ***************** 2025-09-10 00:16:23.501387 | orchestrator | changed: [testbed-manager] 2025-09-10 00:16:23.501478 | orchestrator | 2025-09-10 00:16:23.501491 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-09-10 00:16:23.569581 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-09-10 00:16:23.569664 | orchestrator | 2025-09-10 00:16:23.569673 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-09-10 00:16:25.006275 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-10 00:16:25.006374 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-10 00:16:25.006388 | orchestrator | changed: [testbed-manager] 2025-09-10 00:16:25.006401 | orchestrator | 2025-09-10 00:16:25.006412 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-09-10 00:16:25.640752 | orchestrator | changed: [testbed-manager] 2025-09-10 00:16:25.640852 | orchestrator | 2025-09-10 00:16:25.640870 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-09-10 00:16:25.697205 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:16:25.697271 | orchestrator | 2025-09-10 00:16:25.697284 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-09-10 00:16:25.786414 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-09-10 00:16:25.786477 | orchestrator | 2025-09-10 00:16:25.786491 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-09-10 00:16:26.311687 | orchestrator | changed: [testbed-manager] 2025-09-10 
00:16:26.311795 | orchestrator | 2025-09-10 00:16:26.311813 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-09-10 00:16:26.737104 | orchestrator | changed: [testbed-manager] 2025-09-10 00:16:26.737228 | orchestrator | 2025-09-10 00:16:26.737244 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-09-10 00:16:27.978975 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-09-10 00:16:27.979073 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-09-10 00:16:27.979086 | orchestrator | 2025-09-10 00:16:27.979098 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-09-10 00:16:28.665920 | orchestrator | changed: [testbed-manager] 2025-09-10 00:16:28.666073 | orchestrator | 2025-09-10 00:16:28.666093 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-09-10 00:16:29.080998 | orchestrator | ok: [testbed-manager] 2025-09-10 00:16:29.081099 | orchestrator | 2025-09-10 00:16:29.081115 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-09-10 00:16:29.443689 | orchestrator | changed: [testbed-manager] 2025-09-10 00:16:29.443783 | orchestrator | 2025-09-10 00:16:29.443798 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-09-10 00:16:29.493238 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:16:29.493303 | orchestrator | 2025-09-10 00:16:29.493317 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-09-10 00:16:29.571588 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-09-10 00:16:29.571654 | orchestrator | 2025-09-10 00:16:29.571669 | orchestrator | TASK 
[osism.services.manager : Include wrapper vars file] ********************** 2025-09-10 00:16:29.617105 | orchestrator | ok: [testbed-manager] 2025-09-10 00:16:29.617208 | orchestrator | 2025-09-10 00:16:29.617225 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-09-10 00:16:31.639434 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-09-10 00:16:31.639518 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-09-10 00:16:31.639533 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-09-10 00:16:31.639546 | orchestrator | 2025-09-10 00:16:31.639558 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-09-10 00:16:32.359387 | orchestrator | changed: [testbed-manager] 2025-09-10 00:16:32.359491 | orchestrator | 2025-09-10 00:16:32.359509 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2025-09-10 00:16:33.080871 | orchestrator | changed: [testbed-manager] 2025-09-10 00:16:33.080952 | orchestrator | 2025-09-10 00:16:33.080961 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-09-10 00:16:33.803219 | orchestrator | changed: [testbed-manager] 2025-09-10 00:16:33.803331 | orchestrator | 2025-09-10 00:16:33.803348 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-09-10 00:16:33.889067 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-09-10 00:16:33.889155 | orchestrator | 2025-09-10 00:16:33.889204 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-09-10 00:16:33.938754 | orchestrator | ok: [testbed-manager] 2025-09-10 00:16:33.938809 | orchestrator | 2025-09-10 00:16:33.938822 | orchestrator | TASK 
[osism.services.manager : Copy scripts] *********************************** 2025-09-10 00:16:34.660013 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-09-10 00:16:34.660114 | orchestrator | 2025-09-10 00:16:34.660130 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-09-10 00:16:34.732540 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-09-10 00:16:34.732633 | orchestrator | 2025-09-10 00:16:34.732646 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-09-10 00:16:35.473146 | orchestrator | changed: [testbed-manager] 2025-09-10 00:16:35.473291 | orchestrator | 2025-09-10 00:16:35.473308 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-09-10 00:16:36.054715 | orchestrator | ok: [testbed-manager] 2025-09-10 00:16:36.054810 | orchestrator | 2025-09-10 00:16:36.054828 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-09-10 00:16:36.114463 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:16:36.114529 | orchestrator | 2025-09-10 00:16:36.114544 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-09-10 00:16:36.177609 | orchestrator | ok: [testbed-manager] 2025-09-10 00:16:36.177698 | orchestrator | 2025-09-10 00:16:36.177714 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-09-10 00:16:37.012260 | orchestrator | changed: [testbed-manager] 2025-09-10 00:16:37.012366 | orchestrator | 2025-09-10 00:16:37.012381 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-09-10 00:18:17.238114 | orchestrator | changed: [testbed-manager] 2025-09-10 00:18:17.238233 | orchestrator | 2025-09-10 
00:18:17.238251 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-09-10 00:18:18.242219 | orchestrator | ok: [testbed-manager] 2025-09-10 00:18:18.242314 | orchestrator | 2025-09-10 00:18:18.242329 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2025-09-10 00:18:18.302692 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:18:18.302775 | orchestrator | 2025-09-10 00:18:18.302791 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-09-10 00:18:20.935414 | orchestrator | changed: [testbed-manager] 2025-09-10 00:18:20.935471 | orchestrator | 2025-09-10 00:18:20.935485 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-09-10 00:18:20.993479 | orchestrator | ok: [testbed-manager] 2025-09-10 00:18:20.993550 | orchestrator | 2025-09-10 00:18:20.993560 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-09-10 00:18:20.993568 | orchestrator | 2025-09-10 00:18:20.993574 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-09-10 00:18:21.037307 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:18:21.037364 | orchestrator | 2025-09-10 00:18:21.037377 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-09-10 00:19:21.091570 | orchestrator | Pausing for 60 seconds 2025-09-10 00:19:21.091659 | orchestrator | changed: [testbed-manager] 2025-09-10 00:19:21.091671 | orchestrator | 2025-09-10 00:19:21.091682 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-09-10 00:19:24.738804 | orchestrator | changed: [testbed-manager] 2025-09-10 00:19:24.738896 | orchestrator | 2025-09-10 00:19:24.738909 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for 
an healthy manager service] ***
2025-09-10 00:20:06.477249 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2025-09-10 00:20:06.477350 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2025-09-10 00:20:06.477367 | orchestrator | changed: [testbed-manager]
2025-09-10 00:20:06.477405 | orchestrator |
2025-09-10 00:20:06.477417 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2025-09-10 00:20:16.580055 | orchestrator | changed: [testbed-manager]
2025-09-10 00:20:16.580134 | orchestrator |
2025-09-10 00:20:16.580149 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2025-09-10 00:20:16.668233 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2025-09-10 00:20:16.668300 | orchestrator |
2025-09-10 00:20:16.668312 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-09-10 00:20:16.668324 | orchestrator |
2025-09-10 00:20:16.668336 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2025-09-10 00:20:16.716938 | orchestrator | skipping: [testbed-manager]
2025-09-10 00:20:16.716992 | orchestrator |
2025-09-10 00:20:16.717013 | orchestrator | PLAY RECAP *********************************************************************
2025-09-10 00:20:16.717034 | orchestrator | testbed-manager : ok=66 changed=36 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2025-09-10 00:20:16.717053 | orchestrator |
2025-09-10 00:20:16.831739 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-09-10 00:20:16.831851 | orchestrator | + deactivate
2025-09-10 00:20:16.831869 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-09-10 00:20:16.831883 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-09-10 00:20:16.831895 | orchestrator | + export PATH
2025-09-10 00:20:16.831906 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-09-10 00:20:16.831918 | orchestrator | + '[' -n '' ']'
2025-09-10 00:20:16.831930 | orchestrator | + hash -r
2025-09-10 00:20:16.831963 | orchestrator | + '[' -n '' ']'
2025-09-10 00:20:16.831977 | orchestrator | + unset VIRTUAL_ENV
2025-09-10 00:20:16.831996 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-09-10 00:20:16.832015 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-09-10 00:20:16.832033 | orchestrator | + unset -f deactivate
2025-09-10 00:20:16.832051 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2025-09-10 00:20:16.842396 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-09-10 00:20:16.842435 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-09-10 00:20:16.842447 | orchestrator | + local max_attempts=60
2025-09-10 00:20:16.842459 | orchestrator | + local name=ceph-ansible
2025-09-10 00:20:16.842471 | orchestrator | + local attempt_num=1
2025-09-10 00:20:16.843725 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-10 00:20:16.877201 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-10 00:20:16.877253 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-09-10 00:20:16.877265 | orchestrator | + local max_attempts=60
2025-09-10 00:20:16.877326 | orchestrator | + local name=kolla-ansible
2025-09-10 00:20:16.877340 | orchestrator | + local attempt_num=1
2025-09-10 00:20:16.878300 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-09-10 00:20:16.913223 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-10 00:20:16.913264 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-09-10 00:20:16.913276 | orchestrator | + local max_attempts=60
2025-09-10 00:20:16.913288 | orchestrator | + local name=osism-ansible
2025-09-10 00:20:16.913299 | orchestrator | + local attempt_num=1
2025-09-10 00:20:16.913498 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-09-10 00:20:16.941305 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-10 00:20:16.941340 | orchestrator | + [[ true == \t\r\u\e ]]
2025-09-10 00:20:16.941352 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-09-10 00:20:17.728538 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-09-10 00:20:17.962413 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-09-10 00:20:17.962481 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy)
2025-09-10 00:20:17.962494 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy)
2025-09-10 00:20:17.962527 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp
2025-09-10 00:20:17.962540 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp
2025-09-10 00:20:17.962559 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy)
2025-09-10 00:20:17.962570 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy)
2025-09-10 00:20:17.962580 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 53 seconds (healthy)
2025-09-10 00:20:17.962590 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy)
2025-09-10 00:20:17.962599 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.3 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp
2025-09-10 00:20:17.962609 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy)
2025-09-10 00:20:17.962618 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp
2025-09-10 00:20:17.962628 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy)
2025-09-10 00:20:17.962698 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp
2025-09-10 00:20:17.962712 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy)
2025-09-10 00:20:17.962721 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy)
2025-09-10 00:20:17.970420 | orchestrator | ++ semver latest 7.0.0
2025-09-10 00:20:18.031936 | orchestrator | + [[ -1 -ge 0 ]]
2025-09-10 00:20:18.031965 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-09-10 00:20:18.031978 | orchestrator | + sed -i
s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2025-09-10 00:20:18.036455 | orchestrator | + osism apply resolvconf -l testbed-manager
2025-09-10 00:20:30.265331 | orchestrator | 2025-09-10 00:20:30 | INFO  | Task 78b89be6-285f-4e10-b35b-dd971f8f66b6 (resolvconf) was prepared for execution.
2025-09-10 00:20:30.265441 | orchestrator | 2025-09-10 00:20:30 | INFO  | It takes a moment until task 78b89be6-285f-4e10-b35b-dd971f8f66b6 (resolvconf) has been started and output is visible here.
2025-09-10 00:20:44.046095 | orchestrator |
2025-09-10 00:20:44.046245 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2025-09-10 00:20:44.046258 | orchestrator |
2025-09-10 00:20:44.046266 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-10 00:20:44.046293 | orchestrator | Wednesday 10 September 2025 00:20:34 +0000 (0:00:00.148) 0:00:00.148 ***
2025-09-10 00:20:44.046300 | orchestrator | ok: [testbed-manager]
2025-09-10 00:20:44.046308 | orchestrator |
2025-09-10 00:20:44.046315 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-09-10 00:20:44.046322 | orchestrator | Wednesday 10 September 2025 00:20:37 +0000 (0:00:03.823) 0:00:03.972 ***
2025-09-10 00:20:44.046329 | orchestrator | skipping: [testbed-manager]
2025-09-10 00:20:44.046336 | orchestrator |
2025-09-10 00:20:44.046342 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-09-10 00:20:44.046349 | orchestrator | Wednesday 10 September 2025 00:20:38 +0000 (0:00:00.063) 0:00:04.035 ***
2025-09-10 00:20:44.046355 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2025-09-10 00:20:44.046363 | orchestrator |
2025-09-10 00:20:44.046369 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-09-10 00:20:44.046375 | orchestrator | Wednesday 10 September 2025 00:20:38 +0000 (0:00:00.091) 0:00:04.127 ***
2025-09-10 00:20:44.046382 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2025-09-10 00:20:44.046388 | orchestrator |
2025-09-10 00:20:44.046394 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-09-10 00:20:44.046400 | orchestrator | Wednesday 10 September 2025 00:20:38 +0000 (0:00:00.085) 0:00:04.212 ***
2025-09-10 00:20:44.046406 | orchestrator | ok: [testbed-manager]
2025-09-10 00:20:44.046412 | orchestrator |
2025-09-10 00:20:44.046419 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-09-10 00:20:44.046425 | orchestrator | Wednesday 10 September 2025 00:20:39 +0000 (0:00:01.105) 0:00:05.318 ***
2025-09-10 00:20:44.046431 | orchestrator | skipping: [testbed-manager]
2025-09-10 00:20:44.046437 | orchestrator |
2025-09-10 00:20:44.046443 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-09-10 00:20:44.046450 | orchestrator | Wednesday 10 September 2025 00:20:39 +0000 (0:00:00.485) 0:00:05.382 ***
2025-09-10 00:20:44.046456 | orchestrator | ok: [testbed-manager]
2025-09-10 00:20:44.046462 | orchestrator |
2025-09-10 00:20:44.046468 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-09-10 00:20:44.046474 | orchestrator | Wednesday 10 September 2025 00:20:39 +0000 (0:00:00.082) 0:00:05.867 ***
2025-09-10 00:20:44.046480 | orchestrator | skipping: [testbed-manager]
2025-09-10 00:20:44.046487 | orchestrator |
2025-09-10 00:20:44.046493 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-09-10 00:20:44.046500 | orchestrator | Wednesday 10 September 2025 00:20:39 +0000 (0:00:00.551) 0:00:05.950 ***
2025-09-10 00:20:44.046506 | orchestrator | changed: [testbed-manager]
2025-09-10 00:20:44.046512 | orchestrator |
2025-09-10 00:20:44.046519 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-09-10 00:20:44.046525 | orchestrator | Wednesday 10 September 2025 00:20:40 +0000 (0:00:00.551) 0:00:06.502 ***
2025-09-10 00:20:44.046531 | orchestrator | changed: [testbed-manager]
2025-09-10 00:20:44.046537 | orchestrator |
2025-09-10 00:20:44.046543 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-09-10 00:20:44.046549 | orchestrator | Wednesday 10 September 2025 00:20:41 +0000 (0:00:01.106) 0:00:07.609 ***
2025-09-10 00:20:44.046555 | orchestrator | ok: [testbed-manager]
2025-09-10 00:20:44.046561 | orchestrator |
2025-09-10 00:20:44.046567 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-09-10 00:20:44.046574 | orchestrator | Wednesday 10 September 2025 00:20:42 +0000 (0:00:00.967) 0:00:08.576 ***
2025-09-10 00:20:44.046587 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2025-09-10 00:20:44.046598 | orchestrator |
2025-09-10 00:20:44.046606 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-09-10 00:20:44.046613 | orchestrator | Wednesday 10 September 2025 00:20:42 +0000 (0:00:00.080) 0:00:08.657 ***
2025-09-10 00:20:44.046620 | orchestrator | changed: [testbed-manager]
2025-09-10 00:20:44.046627 | orchestrator |
2025-09-10 00:20:44.046634 | orchestrator | PLAY RECAP *********************************************************************
2025-09-10 00:20:44.046643 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-10 00:20:44.046650 | orchestrator |
2025-09-10 00:20:44.046658 | orchestrator |
2025-09-10 00:20:44.046665 | orchestrator | TASKS RECAP ********************************************************************
2025-09-10 00:20:44.046672 | orchestrator | Wednesday 10 September 2025 00:20:43 +0000 (0:00:01.167) 0:00:09.824 ***
2025-09-10 00:20:44.046679 | orchestrator | ===============================================================================
2025-09-10 00:20:44.046686 | orchestrator | Gathering Facts --------------------------------------------------------- 3.82s
2025-09-10 00:20:44.046693 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.17s
2025-09-10 00:20:44.046700 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.11s
2025-09-10 00:20:44.046708 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.11s
2025-09-10 00:20:44.046715 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.97s
2025-09-10 00:20:44.046723 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.55s
2025-09-10 00:20:44.046742 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.49s
2025-09-10 00:20:44.046750 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s
2025-09-10 00:20:44.046757 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s
2025-09-10 00:20:44.046764 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s
2025-09-10 00:20:44.046771 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s
2025-09-10 00:20:44.046778 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s
2025-09-10 00:20:44.046786 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s
2025-09-10 00:20:44.331013 | orchestrator | + osism apply sshconfig
2025-09-10 00:20:56.406692 | orchestrator | 2025-09-10 00:20:56 | INFO  | Task 3d165464-2bc7-403f-ad18-0319ae2890bc (sshconfig) was prepared for execution.
2025-09-10 00:20:56.406802 | orchestrator | 2025-09-10 00:20:56 | INFO  | It takes a moment until task 3d165464-2bc7-403f-ad18-0319ae2890bc (sshconfig) has been started and output is visible here.
2025-09-10 00:21:08.109114 | orchestrator |
2025-09-10 00:21:08.109270 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2025-09-10 00:21:08.109288 | orchestrator |
2025-09-10 00:21:08.109300 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2025-09-10 00:21:08.109312 | orchestrator | Wednesday 10 September 2025 00:21:00 +0000 (0:00:00.177) 0:00:00.177 ***
2025-09-10 00:21:08.109323 | orchestrator | ok: [testbed-manager]
2025-09-10 00:21:08.109336 | orchestrator |
2025-09-10 00:21:08.109360 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2025-09-10 00:21:08.109372 | orchestrator | Wednesday 10 September 2025 00:21:00 +0000 (0:00:00.557) 0:00:00.734 ***
2025-09-10 00:21:08.109383 | orchestrator | changed: [testbed-manager]
2025-09-10 00:21:08.109396 | orchestrator |
2025-09-10 00:21:08.109407 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2025-09-10 00:21:08.109420 | orchestrator | Wednesday 10 September 2025 00:21:01 +0000 (0:00:00.527) 0:00:01.262 ***
2025-09-10 00:21:08.109431 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2025-09-10 00:21:08.109442 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2025-09-10 00:21:08.109475 | orchestrator | changed: [testbed-manager]
=> (item=testbed-node-4)
2025-09-10 00:21:08.109487 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2025-09-10 00:21:08.109497 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-09-10 00:21:08.109520 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2025-09-10 00:21:08.109532 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2025-09-10 00:21:08.109543 | orchestrator |
2025-09-10 00:21:08.109554 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2025-09-10 00:21:08.109565 | orchestrator | Wednesday 10 September 2025 00:21:07 +0000 (0:00:05.791) 0:00:07.053 ***
2025-09-10 00:21:08.109575 | orchestrator | skipping: [testbed-manager]
2025-09-10 00:21:08.109586 | orchestrator |
2025-09-10 00:21:08.109597 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2025-09-10 00:21:08.109608 | orchestrator | Wednesday 10 September 2025 00:21:07 +0000 (0:00:00.059) 0:00:07.113 ***
2025-09-10 00:21:08.109618 | orchestrator | changed: [testbed-manager]
2025-09-10 00:21:08.109629 | orchestrator |
2025-09-10 00:21:08.109640 | orchestrator | PLAY RECAP *********************************************************************
2025-09-10 00:21:08.109652 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-10 00:21:08.109664 | orchestrator |
2025-09-10 00:21:08.109677 | orchestrator |
2025-09-10 00:21:08.109690 | orchestrator | TASKS RECAP ********************************************************************
2025-09-10 00:21:08.109703 | orchestrator | Wednesday 10 September 2025 00:21:07 +0000 (0:00:00.594) 0:00:07.707 ***
2025-09-10 00:21:08.109716 | orchestrator | ===============================================================================
2025-09-10 00:21:08.109728 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.79s
2025-09-10 00:21:08.109741 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.59s
2025-09-10 00:21:08.109753 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.56s
2025-09-10 00:21:08.109765 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.53s
2025-09-10 00:21:08.109778 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.06s
2025-09-10 00:21:08.394303 | orchestrator | + osism apply known-hosts
2025-09-10 00:21:20.501395 | orchestrator | 2025-09-10 00:21:20 | INFO  | Task d357782c-69f4-40d5-9643-64b28b864ade (known-hosts) was prepared for execution.
2025-09-10 00:21:20.501510 | orchestrator | 2025-09-10 00:21:20 | INFO  | It takes a moment until task d357782c-69f4-40d5-9643-64b28b864ade (known-hosts) has been started and output is visible here.
2025-09-10 00:21:37.293374 | orchestrator |
2025-09-10 00:21:37.293479 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2025-09-10 00:21:37.293494 | orchestrator |
2025-09-10 00:21:37.293506 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2025-09-10 00:21:37.293519 | orchestrator | Wednesday 10 September 2025 00:21:24 +0000 (0:00:00.189) 0:00:00.189 ***
2025-09-10 00:21:37.293530 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-09-10 00:21:37.293542 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-09-10 00:21:37.293553 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-09-10 00:21:37.293564 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-09-10 00:21:37.293574 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-09-10 00:21:37.293585 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-09-10 00:21:37.293596 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-09-10 00:21:37.293606 | orchestrator |
2025-09-10 00:21:37.293617 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2025-09-10 00:21:37.293629 | orchestrator | Wednesday 10 September 2025 00:21:30 +0000 (0:00:06.096) 0:00:06.286 ***
2025-09-10 00:21:37.293662 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-09-10 00:21:37.293675 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-09-10 00:21:37.293686 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-09-10 00:21:37.293697 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-09-10 00:21:37.293707 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-09-10 00:21:37.293728 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-09-10 00:21:37.293740 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-09-10 00:21:37.293751 | orchestrator |
2025-09-10 00:21:37.293762 | orchestrator | TASK
[osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-10 00:21:37.293773 | orchestrator | Wednesday 10 September 2025 00:21:30 +0000 (0:00:00.179) 0:00:06.466 *** 2025-09-10 00:21:37.293784 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILniv2jqOpdfC78rlDlclWAAQwYu92kNDhxcKJ4hZGlP) 2025-09-10 00:21:37.293799 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2Iw8kGuMC2tkr+7Twl18MCw3G0BjPMgh+c2b95mXP9uql1ELP1Png3SIU72yh24V4HVHM7/XqvFP/1Kutc2ZydI+BsS5nAmOvZJAuLONw+QFBypDcUqqdCthc3INTnFFGC0WaonUTg964YmJD76rec4B670X+3Q/bYRoY2/HwzPCtogfA+gSqrRe3/nZRF0mvO9c7apjhAy0rK3TDP8yqrhHKT7MefLCPUXKn/JpJuK1ARZUrpHBQllzsqfF/JsEqmgYf0psiUEGrNFH+M5sS5+gkSbGbfLbJET16vbr3c9hhU7TRoHa6vADnQ52Y3chJJyMGP7UKm0E8vgTHL5BFUK/P9BhDtNEUX3wIQ4EolsykFA8xe1/tKp8yH1oU3jkmX3e32N/R9YAzcLZajBKc8PKVWGaSyfJK/k34xiPTBPVqQWnxxkiLCpaKBDjqD3MOYmqgZYGClaMp8AQN0Sips395LXu2iqWxYx6XC+0mjyih+7Bdg1unR2cLEDZktDU=) 2025-09-10 00:21:37.293814 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJ2cBKDXeRf/Mp8jsjGsNyeeNKOiP9gUzRU+8ie3RNHPHNhphtCsraH9Ftr6IXUFXtu978cMSE+/nA+ae+SZpMc=) 2025-09-10 00:21:37.293826 | orchestrator | 2025-09-10 00:21:37.293837 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-10 00:21:37.293848 | orchestrator | Wednesday 10 September 2025 00:21:32 +0000 (0:00:01.210) 0:00:07.677 *** 2025-09-10 00:21:37.293878 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCM0kSjF7kHR7ntIRy1udjJnZbXjW7kJl216XkoS0S7KSpq3S4WBnn6U6/GBKw2EH8CBRhWQvzZwln7bt6PPOAXXZelCqGeEiWMZeKCyOzUw3JiW6Z4O/yv4Mc0DK2wLHovp+FkYiXsONNZJH4yehSYOJ7K3PQK+4kWi1bb6LR2nqQZYsmzyyGo9zfhIgJQwXTLn890kvH1a97m6gO3sXYZ1xGaysnwMmqiOnDtyqWpoID5d+BBYUz7mJSYFul9blIyzWXCbN9dS8Scgd46LY3Wq9HSpIozsCQgqk9RsLeONQt4e6LlFWMGa1ZRjvyRNIwUs/ApVTVI12Q5OdwLjZ8Pkp6SRSJxL4sT2+OPV1Si9keSfWc2UrF+ZKnYLTymzl3OWGheGTYIIOGs3AXNsevRfmediyEr0yK9iP4pbEI9T1BZWepfKx6SUPC4mJif5aiYUthcfAcWMX47v7uD3Lvb+1FWO8bwdnArOppPwOH4pT84jUL3j0pHddD/MolGQMU=) 2025-09-10 00:21:37.293891 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOPE+NznXq9+RRgBYyzweYYJvG4/opqvGBkUrxc66/IMFaKFeTZBDFqmTE2o0qWl9Brv/GlRtUC7yLNpvfYtG4o=) 2025-09-10 00:21:37.293902 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFugFam8jDXDt2wxyWyEsDwc1Uzi/uRxeKye64T/AIz1) 2025-09-10 00:21:37.293923 | orchestrator | 2025-09-10 00:21:37.293936 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-10 00:21:37.293948 | orchestrator | Wednesday 10 September 2025 00:21:33 +0000 (0:00:01.051) 0:00:08.728 *** 2025-09-10 00:21:37.293962 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDf6o7S12hL15dle+4gV/WbAPoMdkDPYn/2SynBNOtBW5v4n6uFn9J3dpFLw/+uWK7ufte2jmXRRapGdI7BicQjBnW1Rw/jDR3UaePiR8/bwFv2CqR4LIi4ZradBBVoQFXdrMGveN0AGIORTf9xlSCOyki3c8wKJFqi45TX+T6U8hLpfrfX28pvMfrARMboLxguf9CCCa+sA+FLtMmsOXobznpw5SNZkhERQUQupFt9UQQYHTn8yYmrad1RR1VimmIS/fWU+n+ZRip8v183xjNMxdFAG0EuRV6pcNmF/aW91EnmDZT9FlcJ0K2DQXpB2NiLpIFGTOxvbpyMYCLreM2cXbaB9HGe3H8RBjWiQ9QoTk7duByvk8DoDovEJ5dBSIe7/I1ZcwBznebPXw1vz00wCvyzWIdAW4MK4zZIaOqKpVmiU2ZdvXeqNcl71cmTKvU1CvuEYFjNr9UAgelOWCtKtg9+V9m4BG1w/7UlrvL5/AKYseFqvNFnAr91YA9P4os=) 2025-09-10 00:21:37.293975 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBNuTCmvP2RPnV+H9CnMvcEpjpwCZXDTEsV1eFKpFuB515b4TIcjUuQVLWjWu+iqE+YF06qysDxWrtJHdaCv1F4=) 2025-09-10 00:21:37.293987 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFBR2m7/fU/cqvrwItBCzfRNya0S7+jlK0/+RICZsEhv) 2025-09-10 00:21:37.294000 | orchestrator | 2025-09-10 00:21:37.294013 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-10 00:21:37.294081 | orchestrator | Wednesday 10 September 2025 00:21:34 +0000 (0:00:01.043) 0:00:09.772 *** 2025-09-10 00:21:37.294160 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2GpGjH9XTLiB+JEgvaNCVhTeoFGT/5yRfl0qFc7khc/OYiWm36Sd+FYPfjwokFufhqmec6FYhsT9WIw+v6a1ZRYNxRfQTWKE3/HQ94x+rwHUGQG4foI7t3lKr4oV77vWLtntbcIHUTabCoKDZdFzpwnkr+v80ichJvfWdVfey6ntCbT9epwSbcVB8Ys+SG/O9dHiLNBiEJfXotVvEKV/gsaG66Sy9lCOmDVV0szvi5tDTug7AE3v37h2stM/L/nuV9/S0ZgXOmJAhpEFZkFIPzTs/Wo4EuLUkj1zaRrH/N21MfF/cBjiIRFu0YxeguxbYDGMwg/BVQfssXKcURICkFlIC42g+NrfeY8SnVykeqEFtxLHFIT6NYtKlgeyNyckWBAUnTTlJ7l4jMI1Dvtxwjt6CPmGICq/F5i4edpX/Ca5LnO7Nspu1gsCjrjHPPGmi0l172iA3kDxA4XCq7YpiP5GY9UeqAYjMTKpuMgwBDmpz78Sx7fnK0mQS9gP0Yb8=) 2025-09-10 00:21:37.294174 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC5Qy17693FccxaCqHUiUGwCANl9w6CmnQX0UEd3tOQ5I3onxATxvd19554uWmH6Rdlq8Yxedybf/mSjuu1YZ9o=) 2025-09-10 00:21:37.294187 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHAMKegq0QVvTPVw1sdpW20FvRcHUJzTB9XVh8pWOztx) 2025-09-10 00:21:37.294223 | orchestrator | 2025-09-10 00:21:37.294236 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-10 00:21:37.294249 | orchestrator | Wednesday 10 September 2025 00:21:35 +0000 (0:00:01.042) 
0:00:10.814 *** 2025-09-10 00:21:37.294262 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCrCsFNKkPxWMxZ0n5OvRL5t2oz27i3fR7XKdaUQBxeULjKn4kovTYuM8kyLrmXP792tkv2ekq+lKGTb1Mwbh6XbeHs/C2Ve7/1JVu6FM5Iar6VhnA5ebTvG1I9GduK9tMQmymm4wY2InJDmtMtDbgnPYhT/5OuzgzzSPp8C8WwY2hw/8WWltTLLmiL8h7LBUt8MB/EzeU31RT3GAQtl/R89xebCvnK2zeXlM3EhQtOKOUtzbvC7VGmkVRuAKlJMJsJQ+qrYuE+voaOAL8DI1/ukzPOhmvZRLwm30elXGJ1b3Ht+CL7nnendIf2d9oU3U++RghDPtf7q7T04L9iW0MY/HKxuJat0OQcorJ5cd/2dzAjHfgcvbtfcSUy6DXDwuJXCARgTexfh3xmAA3TGtjK6J6IRB9VYbTn9+M6iBYb+IIF6E4IyUPQBQuDtHCOSJfmX3cZM6KC/mDi7Yj0eDQsWLvnRZac6GcFDDQKxYP3hREw6tP4j3T5jI1qLce9ofc=) 2025-09-10 00:21:37.294274 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHe7up9HNrDfYpG5akvidIA2DcFyQh3shIAX0ywVmMYKRVsMzkut+p09byoPoKykHZJ+9YaQxpLl+9QhPTm9RXU=) 2025-09-10 00:21:37.294285 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIK41jM6hmaD5ZnyMQHqc2EsE2EWhONEt2nB97dMD97zp) 2025-09-10 00:21:37.294304 | orchestrator | 2025-09-10 00:21:37.294316 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-10 00:21:37.294327 | orchestrator | Wednesday 10 September 2025 00:21:36 +0000 (0:00:01.070) 0:00:11.885 *** 2025-09-10 00:21:37.294348 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCQk9LCVed2JNcBQGPHUBsQTjrw149bkWH3Ek8zT46Xzuuit3FOcoc25r7HbASwS2GYLNdZyY1HLLd8bEGUsQtN7CfAeldeGelDSt49EBCTghN7myV5wwNI6uw6xVirSQFpvqWyy8PlPkuAQn/QNZbmkYgsnKb083AFKlvtDcSCt0AECCtcYBh5mv2yUeWW/MrNFlpZzlGEba3OZEDuakBSkkbu0dE7+Co0vnJ5R6GaT1pZDlrfa26FJkw7YqI6GzXSs/tC2hbeblzf5Yj+p5bd5IFoKnmav3lkQwIwnujOi3AmkIGWL/W+many9nZR4EtE1dVAt2xptkEinbt6KY03HeZeQcUvSgGDGLxiqAye/GZOC+tH+z8a3LmxS0s1pzfyhIyOUeTO/B/ndGZzsjOA8Vijd48NCI2nO6m+bxGK34/2EmyZiChWCASi7bGIpxNDnQxl7GaYyaYJPDlA3MbBnbxXvaTgBcSpt6HexZlZQpi04wbTLaTgkhVeHexWVK0=) 2025-09-10 00:21:48.200905 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBESuZ3HgOnUQTx0jZVFL4sdYDNjn/dPA+OaccR9vMhrsFsuefrK9wwfxxH6iSl7SRQjScYHfBB5hdPD0QdnLwqQ=) 2025-09-10 00:21:48.201027 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIyq4o/q5+TqbaUfdaOo2Dn+eK/smP93Px2oKRkFDPeE) 2025-09-10 00:21:48.201049 | orchestrator | 2025-09-10 00:21:48.201063 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-10 00:21:48.201078 | orchestrator | Wednesday 10 September 2025 00:21:37 +0000 (0:00:01.065) 0:00:12.950 *** 2025-09-10 00:21:48.201094 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCdk32ROb9++CV4daPoAJ7njw/8Jel28GIHqMJDV6sNXgVofEnVP/d/CcOTQYwqikijqH55Q3xssxlWdTVCBI3wHkUawquHBOal0R7MqAFdk/9YV+ylwcmi/4ucyZCxT2EfpW+fcGSYX5W7573aWjCU7YrQ7ZwqY1iUuuha+6pcGdgF6lSD+T0MvCKqXxxFgp0gjRK0y8EhjLTA03WXxFOBOrCj+/PfYm/XntKjkqkCKv3t+4lRdVLQlbCb5nsiodOetEW9UZZ/0tuWsp+ZCJPgr/tDsMTDulTN9Hz1UNSCdXRnAPSRg5dedVAbHtmZnUHZ/CGD1WjqH1gXquAe9phbSWOLxjp8DZlNtdNGdPRm8y9zPpGn1GYYe+3Ztcc/yDqWinwwzfbGQTYqtOFlugIxBgBaiijv9vSIK/rnPTVsUez23TrDJqEPIKqzF2KuITX1+9vZbtJ7aO6aVE/qTMcrU9VQ8Ie9lL0oQGnSlg248RDUYxf+fnTxy++ULCrAksU=) 2025-09-10 00:21:48.201111 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKWvrVq9Gtz9lECvXLV+Ga9UPbm37a34Ka/K5GfcT9uHQT7OEiaGBLdGHeN/Ooj5/YnkdvM5fus/R4KtFCKMPqw=) 2025-09-10 00:21:48.201124 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOcTrW0lf0iqfr+rDYJCwGoJsc2oTcKTUcR0P5yCCYWg) 2025-09-10 00:21:48.201137 | orchestrator | 2025-09-10 00:21:48.201150 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-09-10 00:21:48.201164 | orchestrator | Wednesday 10 September 2025 00:21:38 +0000 (0:00:01.084) 0:00:14.035 *** 2025-09-10 00:21:48.201177 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-09-10 00:21:48.201190 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-09-10 00:21:48.201231 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-09-10 00:21:48.201244 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-09-10 00:21:48.201258 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-09-10 00:21:48.201271 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-09-10 00:21:48.201284 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-09-10 00:21:48.201296 | orchestrator | 2025-09-10 00:21:48.201309 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-09-10 00:21:48.201382 | orchestrator | Wednesday 10 September 2025 00:21:43 +0000 (0:00:05.284) 0:00:19.320 *** 2025-09-10 00:21:48.201413 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-09-10 00:21:48.201428 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for 
testbed-manager => (item=Scanned entries of testbed-node-0) 2025-09-10 00:21:48.201466 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-09-10 00:21:48.201480 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-09-10 00:21:48.201494 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-09-10 00:21:48.201507 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-09-10 00:21:48.201518 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-09-10 00:21:48.201526 | orchestrator | 2025-09-10 00:21:48.201534 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-10 00:21:48.201542 | orchestrator | Wednesday 10 September 2025 00:21:43 +0000 (0:00:00.163) 0:00:19.483 *** 2025-09-10 00:21:48.201550 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILniv2jqOpdfC78rlDlclWAAQwYu92kNDhxcKJ4hZGlP) 2025-09-10 00:21:48.201583 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC2Iw8kGuMC2tkr+7Twl18MCw3G0BjPMgh+c2b95mXP9uql1ELP1Png3SIU72yh24V4HVHM7/XqvFP/1Kutc2ZydI+BsS5nAmOvZJAuLONw+QFBypDcUqqdCthc3INTnFFGC0WaonUTg964YmJD76rec4B670X+3Q/bYRoY2/HwzPCtogfA+gSqrRe3/nZRF0mvO9c7apjhAy0rK3TDP8yqrhHKT7MefLCPUXKn/JpJuK1ARZUrpHBQllzsqfF/JsEqmgYf0psiUEGrNFH+M5sS5+gkSbGbfLbJET16vbr3c9hhU7TRoHa6vADnQ52Y3chJJyMGP7UKm0E8vgTHL5BFUK/P9BhDtNEUX3wIQ4EolsykFA8xe1/tKp8yH1oU3jkmX3e32N/R9YAzcLZajBKc8PKVWGaSyfJK/k34xiPTBPVqQWnxxkiLCpaKBDjqD3MOYmqgZYGClaMp8AQN0Sips395LXu2iqWxYx6XC+0mjyih+7Bdg1unR2cLEDZktDU=) 2025-09-10 00:21:48.201593 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJ2cBKDXeRf/Mp8jsjGsNyeeNKOiP9gUzRU+8ie3RNHPHNhphtCsraH9Ftr6IXUFXtu978cMSE+/nA+ae+SZpMc=) 2025-09-10 00:21:48.201601 | orchestrator | 2025-09-10 00:21:48.201609 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-10 00:21:48.201617 | orchestrator | Wednesday 10 September 2025 00:21:44 +0000 (0:00:01.089) 0:00:20.573 *** 2025-09-10 00:21:48.201625 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOPE+NznXq9+RRgBYyzweYYJvG4/opqvGBkUrxc66/IMFaKFeTZBDFqmTE2o0qWl9Brv/GlRtUC7yLNpvfYtG4o=) 2025-09-10 00:21:48.201634 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCM0kSjF7kHR7ntIRy1udjJnZbXjW7kJl216XkoS0S7KSpq3S4WBnn6U6/GBKw2EH8CBRhWQvzZwln7bt6PPOAXXZelCqGeEiWMZeKCyOzUw3JiW6Z4O/yv4Mc0DK2wLHovp+FkYiXsONNZJH4yehSYOJ7K3PQK+4kWi1bb6LR2nqQZYsmzyyGo9zfhIgJQwXTLn890kvH1a97m6gO3sXYZ1xGaysnwMmqiOnDtyqWpoID5d+BBYUz7mJSYFul9blIyzWXCbN9dS8Scgd46LY3Wq9HSpIozsCQgqk9RsLeONQt4e6LlFWMGa1ZRjvyRNIwUs/ApVTVI12Q5OdwLjZ8Pkp6SRSJxL4sT2+OPV1Si9keSfWc2UrF+ZKnYLTymzl3OWGheGTYIIOGs3AXNsevRfmediyEr0yK9iP4pbEI9T1BZWepfKx6SUPC4mJif5aiYUthcfAcWMX47v7uD3Lvb+1FWO8bwdnArOppPwOH4pT84jUL3j0pHddD/MolGQMU=) 
2025-09-10 00:21:48.201642 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFugFam8jDXDt2wxyWyEsDwc1Uzi/uRxeKye64T/AIz1) 2025-09-10 00:21:48.201650 | orchestrator | 2025-09-10 00:21:48.201657 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-10 00:21:48.201665 | orchestrator | Wednesday 10 September 2025 00:21:45 +0000 (0:00:01.087) 0:00:21.661 *** 2025-09-10 00:21:48.201680 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDf6o7S12hL15dle+4gV/WbAPoMdkDPYn/2SynBNOtBW5v4n6uFn9J3dpFLw/+uWK7ufte2jmXRRapGdI7BicQjBnW1Rw/jDR3UaePiR8/bwFv2CqR4LIi4ZradBBVoQFXdrMGveN0AGIORTf9xlSCOyki3c8wKJFqi45TX+T6U8hLpfrfX28pvMfrARMboLxguf9CCCa+sA+FLtMmsOXobznpw5SNZkhERQUQupFt9UQQYHTn8yYmrad1RR1VimmIS/fWU+n+ZRip8v183xjNMxdFAG0EuRV6pcNmF/aW91EnmDZT9FlcJ0K2DQXpB2NiLpIFGTOxvbpyMYCLreM2cXbaB9HGe3H8RBjWiQ9QoTk7duByvk8DoDovEJ5dBSIe7/I1ZcwBznebPXw1vz00wCvyzWIdAW4MK4zZIaOqKpVmiU2ZdvXeqNcl71cmTKvU1CvuEYFjNr9UAgelOWCtKtg9+V9m4BG1w/7UlrvL5/AKYseFqvNFnAr91YA9P4os=) 2025-09-10 00:21:48.201688 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBNuTCmvP2RPnV+H9CnMvcEpjpwCZXDTEsV1eFKpFuB515b4TIcjUuQVLWjWu+iqE+YF06qysDxWrtJHdaCv1F4=) 2025-09-10 00:21:48.201696 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFBR2m7/fU/cqvrwItBCzfRNya0S7+jlK0/+RICZsEhv) 2025-09-10 00:21:48.201704 | orchestrator | 2025-09-10 00:21:48.201712 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-10 00:21:48.201720 | orchestrator | Wednesday 10 September 2025 00:21:47 +0000 (0:00:01.084) 0:00:22.745 *** 2025-09-10 00:21:48.201733 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC2GpGjH9XTLiB+JEgvaNCVhTeoFGT/5yRfl0qFc7khc/OYiWm36Sd+FYPfjwokFufhqmec6FYhsT9WIw+v6a1ZRYNxRfQTWKE3/HQ94x+rwHUGQG4foI7t3lKr4oV77vWLtntbcIHUTabCoKDZdFzpwnkr+v80ichJvfWdVfey6ntCbT9epwSbcVB8Ys+SG/O9dHiLNBiEJfXotVvEKV/gsaG66Sy9lCOmDVV0szvi5tDTug7AE3v37h2stM/L/nuV9/S0ZgXOmJAhpEFZkFIPzTs/Wo4EuLUkj1zaRrH/N21MfF/cBjiIRFu0YxeguxbYDGMwg/BVQfssXKcURICkFlIC42g+NrfeY8SnVykeqEFtxLHFIT6NYtKlgeyNyckWBAUnTTlJ7l4jMI1Dvtxwjt6CPmGICq/F5i4edpX/Ca5LnO7Nspu1gsCjrjHPPGmi0l172iA3kDxA4XCq7YpiP5GY9UeqAYjMTKpuMgwBDmpz78Sx7fnK0mQS9gP0Yb8=) 2025-09-10 00:21:48.201741 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC5Qy17693FccxaCqHUiUGwCANl9w6CmnQX0UEd3tOQ5I3onxATxvd19554uWmH6Rdlq8Yxedybf/mSjuu1YZ9o=) 2025-09-10 00:21:48.201757 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHAMKegq0QVvTPVw1sdpW20FvRcHUJzTB9XVh8pWOztx) 2025-09-10 00:21:52.633027 | orchestrator | 2025-09-10 00:21:52.633133 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-10 00:21:52.633151 | orchestrator | Wednesday 10 September 2025 00:21:48 +0000 (0:00:01.107) 0:00:23.852 *** 2025-09-10 00:21:52.633165 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHe7up9HNrDfYpG5akvidIA2DcFyQh3shIAX0ywVmMYKRVsMzkut+p09byoPoKykHZJ+9YaQxpLl+9QhPTm9RXU=) 2025-09-10 00:21:52.633182 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCrCsFNKkPxWMxZ0n5OvRL5t2oz27i3fR7XKdaUQBxeULjKn4kovTYuM8kyLrmXP792tkv2ekq+lKGTb1Mwbh6XbeHs/C2Ve7/1JVu6FM5Iar6VhnA5ebTvG1I9GduK9tMQmymm4wY2InJDmtMtDbgnPYhT/5OuzgzzSPp8C8WwY2hw/8WWltTLLmiL8h7LBUt8MB/EzeU31RT3GAQtl/R89xebCvnK2zeXlM3EhQtOKOUtzbvC7VGmkVRuAKlJMJsJQ+qrYuE+voaOAL8DI1/ukzPOhmvZRLwm30elXGJ1b3Ht+CL7nnendIf2d9oU3U++RghDPtf7q7T04L9iW0MY/HKxuJat0OQcorJ5cd/2dzAjHfgcvbtfcSUy6DXDwuJXCARgTexfh3xmAA3TGtjK6J6IRB9VYbTn9+M6iBYb+IIF6E4IyUPQBQuDtHCOSJfmX3cZM6KC/mDi7Yj0eDQsWLvnRZac6GcFDDQKxYP3hREw6tP4j3T5jI1qLce9ofc=) 2025-09-10 00:21:52.633252 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIK41jM6hmaD5ZnyMQHqc2EsE2EWhONEt2nB97dMD97zp) 2025-09-10 00:21:52.633268 | orchestrator | 2025-09-10 00:21:52.633280 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-10 00:21:52.633291 | orchestrator | Wednesday 10 September 2025 00:21:49 +0000 (0:00:01.126) 0:00:24.979 *** 2025-09-10 00:21:52.633302 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCQk9LCVed2JNcBQGPHUBsQTjrw149bkWH3Ek8zT46Xzuuit3FOcoc25r7HbASwS2GYLNdZyY1HLLd8bEGUsQtN7CfAeldeGelDSt49EBCTghN7myV5wwNI6uw6xVirSQFpvqWyy8PlPkuAQn/QNZbmkYgsnKb083AFKlvtDcSCt0AECCtcYBh5mv2yUeWW/MrNFlpZzlGEba3OZEDuakBSkkbu0dE7+Co0vnJ5R6GaT1pZDlrfa26FJkw7YqI6GzXSs/tC2hbeblzf5Yj+p5bd5IFoKnmav3lkQwIwnujOi3AmkIGWL/W+many9nZR4EtE1dVAt2xptkEinbt6KY03HeZeQcUvSgGDGLxiqAye/GZOC+tH+z8a3LmxS0s1pzfyhIyOUeTO/B/ndGZzsjOA8Vijd48NCI2nO6m+bxGK34/2EmyZiChWCASi7bGIpxNDnQxl7GaYyaYJPDlA3MbBnbxXvaTgBcSpt6HexZlZQpi04wbTLaTgkhVeHexWVK0=) 2025-09-10 00:21:52.633342 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBESuZ3HgOnUQTx0jZVFL4sdYDNjn/dPA+OaccR9vMhrsFsuefrK9wwfxxH6iSl7SRQjScYHfBB5hdPD0QdnLwqQ=) 2025-09-10 00:21:52.633354 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIyq4o/q5+TqbaUfdaOo2Dn+eK/smP93Px2oKRkFDPeE) 2025-09-10 00:21:52.633365 | orchestrator | 2025-09-10 00:21:52.633376 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-10 00:21:52.633387 | orchestrator | Wednesday 10 September 2025 00:21:50 +0000 (0:00:01.108) 0:00:26.087 *** 2025-09-10 00:21:52.633399 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCdk32ROb9++CV4daPoAJ7njw/8Jel28GIHqMJDV6sNXgVofEnVP/d/CcOTQYwqikijqH55Q3xssxlWdTVCBI3wHkUawquHBOal0R7MqAFdk/9YV+ylwcmi/4ucyZCxT2EfpW+fcGSYX5W7573aWjCU7YrQ7ZwqY1iUuuha+6pcGdgF6lSD+T0MvCKqXxxFgp0gjRK0y8EhjLTA03WXxFOBOrCj+/PfYm/XntKjkqkCKv3t+4lRdVLQlbCb5nsiodOetEW9UZZ/0tuWsp+ZCJPgr/tDsMTDulTN9Hz1UNSCdXRnAPSRg5dedVAbHtmZnUHZ/CGD1WjqH1gXquAe9phbSWOLxjp8DZlNtdNGdPRm8y9zPpGn1GYYe+3Ztcc/yDqWinwwzfbGQTYqtOFlugIxBgBaiijv9vSIK/rnPTVsUez23TrDJqEPIKqzF2KuITX1+9vZbtJ7aO6aVE/qTMcrU9VQ8Ie9lL0oQGnSlg248RDUYxf+fnTxy++ULCrAksU=) 2025-09-10 00:21:52.633410 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKWvrVq9Gtz9lECvXLV+Ga9UPbm37a34Ka/K5GfcT9uHQT7OEiaGBLdGHeN/Ooj5/YnkdvM5fus/R4KtFCKMPqw=) 2025-09-10 00:21:52.633422 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOcTrW0lf0iqfr+rDYJCwGoJsc2oTcKTUcR0P5yCCYWg) 2025-09-10 00:21:52.633433 | orchestrator | 2025-09-10 00:21:52.633443 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-09-10 00:21:52.633454 | orchestrator | Wednesday 10 September 2025 00:21:51 +0000 (0:00:01.113) 0:00:27.201 *** 2025-09-10 00:21:52.633466 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-09-10 00:21:52.633477 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-10 00:21:52.633488 | orchestrator | 
skipping: [testbed-manager] => (item=testbed-node-1)  2025-09-10 00:21:52.633498 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-09-10 00:21:52.633509 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-09-10 00:21:52.633519 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-09-10 00:21:52.633530 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-09-10 00:21:52.633542 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:21:52.633554 | orchestrator | 2025-09-10 00:21:52.633582 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-09-10 00:21:52.633596 | orchestrator | Wednesday 10 September 2025 00:21:51 +0000 (0:00:00.166) 0:00:27.368 *** 2025-09-10 00:21:52.633609 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:21:52.633621 | orchestrator | 2025-09-10 00:21:52.633634 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-09-10 00:21:52.633647 | orchestrator | Wednesday 10 September 2025 00:21:51 +0000 (0:00:00.065) 0:00:27.433 *** 2025-09-10 00:21:52.633659 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:21:52.633671 | orchestrator | 2025-09-10 00:21:52.633684 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-09-10 00:21:52.633697 | orchestrator | Wednesday 10 September 2025 00:21:51 +0000 (0:00:00.050) 0:00:27.484 *** 2025-09-10 00:21:52.633716 | orchestrator | changed: [testbed-manager] 2025-09-10 00:21:52.633729 | orchestrator | 2025-09-10 00:21:52.633742 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-10 00:21:52.633755 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-10 00:21:52.633769 | orchestrator | 2025-09-10 00:21:52.633781 | orchestrator | 2025-09-10 
00:21:52.633793 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-10 00:21:52.633805 | orchestrator | Wednesday 10 September 2025 00:21:52 +0000 (0:00:00.512) 0:00:27.996 *** 2025-09-10 00:21:52.633817 | orchestrator | =============================================================================== 2025-09-10 00:21:52.633829 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.10s 2025-09-10 00:21:52.633842 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.28s 2025-09-10 00:21:52.633872 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.21s 2025-09-10 00:21:52.633884 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2025-09-10 00:21:52.633896 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-09-10 00:21:52.633908 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-09-10 00:21:52.633920 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-09-10 00:21:52.633932 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-09-10 00:21:52.633942 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-09-10 00:21:52.633953 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-09-10 00:21:52.633964 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-09-10 00:21:52.633974 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-09-10 00:21:52.633985 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-09-10 
00:21:52.633995 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-09-10 00:21:52.634006 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-09-10 00:21:52.634085 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-09-10 00:21:52.634100 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.51s 2025-09-10 00:21:52.634111 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.18s 2025-09-10 00:21:52.634122 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.17s 2025-09-10 00:21:52.634138 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.16s 2025-09-10 00:21:52.921339 | orchestrator | + osism apply squid 2025-09-10 00:22:05.016548 | orchestrator | 2025-09-10 00:22:05 | INFO  | Task 1d6d5b6d-db96-4143-9f2c-f62a0e7f2ef7 (squid) was prepared for execution. 2025-09-10 00:22:05.016660 | orchestrator | 2025-09-10 00:22:05 | INFO  | It takes a moment until task 1d6d5b6d-db96-4143-9f2c-f62a0e7f2ef7 (squid) has been started and output is visible here. 
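The known_hosts play that just finished scans every node twice (by hostname and by ansible_host) and writes the merged results. Outside of Ansible, the same effect is a keyscan piped through a dedup into the known_hosts file; a minimal sketch, with illustrative host addresses and sample (not real) key entries:

```shell
# Real scan, roughly what the "Run ssh-keyscan" tasks do per host:
#   for h in 192.168.16.5 192.168.16.10; do ssh-keyscan -T 5 "$h"; done
#
# Deduplicate before writing, since the hostname and ansible_host
# scans can return the same key line twice (sample data below):
scanned='192.168.16.5 ssh-ed25519 AAAAexample1
192.168.16.5 ssh-ed25519 AAAAexample1
192.168.16.10 ssh-rsa AAAAexample2'

printf '%s\n' "$scanned" | sort -u
# Append the deduplicated lines to ~/.ssh/known_hosts, then tighten
# permissions as the role's final "Set file permissions" task does:
#   chmod 0644 ~/.ssh/known_hosts
```

The dedup matters because the recap above shows separate write passes per address family; without `sort -u`, repeated runs would grow the file with duplicate entries.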
2025-09-10 00:23:59.959153 | orchestrator | 2025-09-10 00:23:59.959319 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-09-10 00:23:59.959339 | orchestrator | 2025-09-10 00:23:59.959351 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-09-10 00:23:59.959363 | orchestrator | Wednesday 10 September 2025 00:22:08 +0000 (0:00:00.166) 0:00:00.166 *** 2025-09-10 00:23:59.959374 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-09-10 00:23:59.959386 | orchestrator | 2025-09-10 00:23:59.959397 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-09-10 00:23:59.959436 | orchestrator | Wednesday 10 September 2025 00:22:09 +0000 (0:00:00.100) 0:00:00.267 *** 2025-09-10 00:23:59.959448 | orchestrator | ok: [testbed-manager] 2025-09-10 00:23:59.959460 | orchestrator | 2025-09-10 00:23:59.959472 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-09-10 00:23:59.959483 | orchestrator | Wednesday 10 September 2025 00:22:10 +0000 (0:00:01.735) 0:00:02.002 *** 2025-09-10 00:23:59.959495 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-09-10 00:23:59.959506 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-09-10 00:23:59.959517 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-09-10 00:23:59.959528 | orchestrator | 2025-09-10 00:23:59.959539 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-09-10 00:23:59.959550 | orchestrator | Wednesday 10 September 2025 00:22:12 +0000 (0:00:01.232) 0:00:03.235 *** 2025-09-10 00:23:59.959561 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-09-10 00:23:59.959572 | 
orchestrator | 2025-09-10 00:23:59.959583 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-09-10 00:23:59.959594 | orchestrator | Wednesday 10 September 2025 00:22:13 +0000 (0:00:01.112) 0:00:04.347 *** 2025-09-10 00:23:59.959604 | orchestrator | ok: [testbed-manager] 2025-09-10 00:23:59.959615 | orchestrator | 2025-09-10 00:23:59.959626 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-09-10 00:23:59.959637 | orchestrator | Wednesday 10 September 2025 00:22:13 +0000 (0:00:00.384) 0:00:04.732 *** 2025-09-10 00:23:59.959648 | orchestrator | changed: [testbed-manager] 2025-09-10 00:23:59.959659 | orchestrator | 2025-09-10 00:23:59.959670 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-09-10 00:23:59.959681 | orchestrator | Wednesday 10 September 2025 00:22:14 +0000 (0:00:00.913) 0:00:05.645 *** 2025-09-10 00:23:59.959692 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
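The `FAILED - RETRYING: ... (10 retries left)` line above is Ansible's `retries`/`until` loop waiting for the squid container to come up; the task keeps re-running its check until it passes or the retries are exhausted. A standalone sketch of that pattern (the `docker inspect` usage line is illustrative, assuming a container named `squid` with a Docker healthcheck):

```shell
# Retry a check up to $1 times with $2 seconds between attempts,
# succeeding as soon as the check passes -- the shape of an Ansible
# task with "retries: 10" and an "until" condition.
retry() {
    attempts=$1
    delay=$2
    shift 2
    i=0
    until "$@"; do
        i=$((i + 1))
        [ "$i" -ge "$attempts" ] && return 1
        sleep "$delay"
    done
}

# Illustrative use: wait for a healthy Docker healthcheck status, e.g.
#   retry 10 5 sh -c \
#     '[ "$(docker inspect -f "{{.State.Health.Status}}" squid)" = healthy ]'
```

With ten retries and a pause between them, a slow container start (like the ~32s "Manage squid service" task in the recap below) succeeds on a later attempt instead of failing the play outright.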
2025-09-10 00:23:59.959706 | orchestrator | ok: [testbed-manager] 2025-09-10 00:23:59.959718 | orchestrator | 2025-09-10 00:23:59.959731 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-09-10 00:23:59.959744 | orchestrator | Wednesday 10 September 2025 00:22:46 +0000 (0:00:32.072) 0:00:37.717 *** 2025-09-10 00:23:59.959757 | orchestrator | changed: [testbed-manager] 2025-09-10 00:23:59.959769 | orchestrator | 2025-09-10 00:23:59.959781 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-09-10 00:23:59.959794 | orchestrator | Wednesday 10 September 2025 00:22:58 +0000 (0:00:12.341) 0:00:50.059 *** 2025-09-10 00:23:59.959808 | orchestrator | Pausing for 60 seconds 2025-09-10 00:23:59.959821 | orchestrator | changed: [testbed-manager] 2025-09-10 00:23:59.959834 | orchestrator | 2025-09-10 00:23:59.959847 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-09-10 00:23:59.959860 | orchestrator | Wednesday 10 September 2025 00:23:58 +0000 (0:01:00.079) 0:01:50.139 *** 2025-09-10 00:23:59.959872 | orchestrator | ok: [testbed-manager] 2025-09-10 00:23:59.959885 | orchestrator | 2025-09-10 00:23:59.959897 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-09-10 00:23:59.959910 | orchestrator | Wednesday 10 September 2025 00:23:59 +0000 (0:00:00.079) 0:01:50.218 *** 2025-09-10 00:23:59.959923 | orchestrator | changed: [testbed-manager] 2025-09-10 00:23:59.959935 | orchestrator | 2025-09-10 00:23:59.959948 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-10 00:23:59.959961 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-10 00:23:59.959973 | orchestrator | 2025-09-10 00:23:59.959986 | orchestrator | 2025-09-10 00:23:59.959998 | orchestrator | 
TASKS RECAP ******************************************************************** 2025-09-10 00:23:59.960010 | orchestrator | Wednesday 10 September 2025 00:23:59 +0000 (0:00:00.682) 0:01:50.901 *** 2025-09-10 00:23:59.960030 | orchestrator | =============================================================================== 2025-09-10 00:23:59.960044 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2025-09-10 00:23:59.960055 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 32.07s 2025-09-10 00:23:59.960066 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.34s 2025-09-10 00:23:59.960076 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.74s 2025-09-10 00:23:59.960087 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.23s 2025-09-10 00:23:59.960098 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.11s 2025-09-10 00:23:59.960109 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.91s 2025-09-10 00:23:59.960120 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.68s 2025-09-10 00:23:59.960131 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.38s 2025-09-10 00:23:59.960142 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.10s 2025-09-10 00:23:59.960153 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.08s 2025-09-10 00:24:00.261370 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-10 00:24:00.261974 | orchestrator | ++ semver latest 9.0.0 2025-09-10 00:24:00.317455 | orchestrator | + [[ -1 -lt 0 ]] 2025-09-10 00:24:00.317536 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-10 00:24:00.318138 | 
orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-09-10 00:24:12.308423 | orchestrator | 2025-09-10 00:24:12 | INFO  | Task 9a579a02-54a8-4418-8780-850f5e4f7841 (operator) was prepared for execution. 2025-09-10 00:24:12.308519 | orchestrator | 2025-09-10 00:24:12 | INFO  | It takes a moment until task 9a579a02-54a8-4418-8780-850f5e4f7841 (operator) has been started and output is visible here. 2025-09-10 00:24:29.816735 | orchestrator | 2025-09-10 00:24:29.816846 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-09-10 00:24:29.816863 | orchestrator | 2025-09-10 00:24:29.816875 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-10 00:24:29.816887 | orchestrator | Wednesday 10 September 2025 00:24:16 +0000 (0:00:00.153) 0:00:00.153 *** 2025-09-10 00:24:29.816914 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:24:29.816927 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:24:29.816938 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:24:29.816949 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:24:29.816960 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:24:29.816971 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:24:29.816982 | orchestrator | 2025-09-10 00:24:29.816993 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-09-10 00:24:29.817004 | orchestrator | Wednesday 10 September 2025 00:24:20 +0000 (0:00:04.562) 0:00:04.716 *** 2025-09-10 00:24:29.817015 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:24:29.817026 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:24:29.817038 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:24:29.817049 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:24:29.817060 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:24:29.817071 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:24:29.817081 | orchestrator | 2025-09-10 
00:24:29.817092 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-09-10 00:24:29.817103 | orchestrator | 2025-09-10 00:24:29.817114 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-09-10 00:24:29.817126 | orchestrator | Wednesday 10 September 2025 00:24:21 +0000 (0:00:00.970) 0:00:05.687 *** 2025-09-10 00:24:29.817137 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:24:29.817148 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:24:29.817159 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:24:29.817170 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:24:29.817181 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:24:29.817191 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:24:29.817259 | orchestrator | 2025-09-10 00:24:29.817272 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-09-10 00:24:29.817285 | orchestrator | Wednesday 10 September 2025 00:24:21 +0000 (0:00:00.171) 0:00:05.858 *** 2025-09-10 00:24:29.817298 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:24:29.817310 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:24:29.817322 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:24:29.817335 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:24:29.817347 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:24:29.817359 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:24:29.817371 | orchestrator | 2025-09-10 00:24:29.817383 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-09-10 00:24:29.817396 | orchestrator | Wednesday 10 September 2025 00:24:22 +0000 (0:00:00.163) 0:00:06.022 *** 2025-09-10 00:24:29.817409 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:24:29.817423 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:24:29.817436 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:24:29.817448 | 
orchestrator | changed: [testbed-node-3] 2025-09-10 00:24:29.817462 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:24:29.817475 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:24:29.817487 | orchestrator | 2025-09-10 00:24:29.817499 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-09-10 00:24:29.817512 | orchestrator | Wednesday 10 September 2025 00:24:22 +0000 (0:00:00.611) 0:00:06.633 *** 2025-09-10 00:24:29.817525 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:24:29.817537 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:24:29.817549 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:24:29.817561 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:24:29.817574 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:24:29.817586 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:24:29.817598 | orchestrator | 2025-09-10 00:24:29.817611 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-09-10 00:24:29.817624 | orchestrator | Wednesday 10 September 2025 00:24:23 +0000 (0:00:00.871) 0:00:07.505 *** 2025-09-10 00:24:29.817636 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-09-10 00:24:29.817647 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-09-10 00:24:29.817658 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-09-10 00:24:29.817669 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-09-10 00:24:29.817680 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-09-10 00:24:29.817690 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-09-10 00:24:29.817701 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-09-10 00:24:29.817712 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-09-10 00:24:29.817723 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-09-10 00:24:29.817733 | orchestrator | changed: 
[testbed-node-3] => (item=sudo) 2025-09-10 00:24:29.817744 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-09-10 00:24:29.817755 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-09-10 00:24:29.817766 | orchestrator | 2025-09-10 00:24:29.817776 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-09-10 00:24:29.817788 | orchestrator | Wednesday 10 September 2025 00:24:24 +0000 (0:00:01.371) 0:00:08.876 *** 2025-09-10 00:24:29.817803 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:24:29.817815 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:24:29.817826 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:24:29.817837 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:24:29.817847 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:24:29.817858 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:24:29.817869 | orchestrator | 2025-09-10 00:24:29.817880 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-09-10 00:24:29.817891 | orchestrator | Wednesday 10 September 2025 00:24:26 +0000 (0:00:01.325) 0:00:10.201 *** 2025-09-10 00:24:29.817902 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-09-10 00:24:29.817921 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To
2025-09-10 00:24:29.817932 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2025-09-10 00:24:29.817943 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2025-09-10 00:24:29.817970 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2025-09-10 00:24:29.817982 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2025-09-10 00:24:29.817993 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2025-09-10 00:24:29.818003 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2025-09-10 00:24:29.818062 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2025-09-10 00:24:29.818075 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2025-09-10 00:24:29.818086 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2025-09-10 00:24:29.818096 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2025-09-10 00:24:29.818107 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2025-09-10 00:24:29.818118 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2025-09-10 00:24:29.818128 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2025-09-10 00:24:29.818139 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2025-09-10 00:24:29.818150 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2025-09-10 00:24:29.818161 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2025-09-10 00:24:29.818172 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2025-09-10 00:24:29.818183 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2025-09-10 00:24:29.818193 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2025-09-10 00:24:29.818225 | orchestrator |
2025-09-10 00:24:29.818236 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2025-09-10 00:24:29.818248 | orchestrator | Wednesday 10 September 2025 00:24:27 +0000 (0:00:01.254) 0:00:11.456 ***
2025-09-10 00:24:29.818259 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:24:29.818269 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:24:29.818280 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:24:29.818291 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:24:29.818301 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:24:29.818312 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:24:29.818323 | orchestrator |
2025-09-10 00:24:29.818333 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-09-10 00:24:29.818344 | orchestrator | Wednesday 10 September 2025 00:24:27 +0000 (0:00:00.173) 0:00:11.629 ***
2025-09-10 00:24:29.818355 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:24:29.818365 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:24:29.818376 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:24:29.818386 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:24:29.818397 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:24:29.818408 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:24:29.818419 | orchestrator |
2025-09-10 00:24:29.818430 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-09-10 00:24:29.818440 | orchestrator | Wednesday 10 September 2025 00:24:28 +0000 (0:00:00.622) 0:00:12.251 ***
2025-09-10 00:24:29.818451 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:24:29.818462 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:24:29.818472 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:24:29.818483 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:24:29.818493 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:24:29.818504 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:24:29.818515 | orchestrator |
2025-09-10 00:24:29.818525 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-09-10 00:24:29.818544 | orchestrator | Wednesday 10 September 2025 00:24:28 +0000 (0:00:00.197) 0:00:12.449 ***
2025-09-10 00:24:29.818555 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-10 00:24:29.818570 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-10 00:24:29.818581 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:24:29.818592 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:24:29.818603 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-09-10 00:24:29.818613 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:24:29.818624 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-10 00:24:29.818634 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:24:29.818645 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-10 00:24:29.818655 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:24:29.818666 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-09-10 00:24:29.818676 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:24:29.818686 | orchestrator |
2025-09-10 00:24:29.818697 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-09-10 00:24:29.818708 | orchestrator | Wednesday 10 September 2025 00:24:29 +0000 (0:00:00.692) 0:00:13.141 ***
2025-09-10 00:24:29.818719 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:24:29.818729 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:24:29.818739 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:24:29.818750 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:24:29.818761 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:24:29.818771 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:24:29.818782 | orchestrator |
2025-09-10 00:24:29.818792 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-09-10 00:24:29.818809 | orchestrator | Wednesday 10 September 2025 00:24:29 +0000 (0:00:00.170) 0:00:13.312 ***
2025-09-10 00:24:29.818820 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:24:29.818831 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:24:29.818842 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:24:29.818852 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:24:29.818863 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:24:29.818873 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:24:29.818884 | orchestrator |
2025-09-10 00:24:29.818894 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-09-10 00:24:29.818905 | orchestrator | Wednesday 10 September 2025 00:24:29 +0000 (0:00:00.206) 0:00:13.518 ***
2025-09-10 00:24:29.818916 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:24:29.818927 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:24:29.818937 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:24:29.818948 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:24:29.818966 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:24:31.065856 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:24:31.065962 | orchestrator |
2025-09-10 00:24:31.065979 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-09-10 00:24:31.065992 | orchestrator | Wednesday 10 September 2025 00:24:29 +0000 (0:00:00.177) 0:00:13.696 ***
2025-09-10 00:24:31.066003 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:24:31.066067 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:24:31.066080 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:24:31.066091 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:24:31.066102 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:24:31.066113 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:24:31.066124 | orchestrator |
2025-09-10 00:24:31.066135 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-09-10 00:24:31.066147 | orchestrator | Wednesday 10 September 2025 00:24:30 +0000 (0:00:00.625) 0:00:14.321 ***
2025-09-10 00:24:31.066158 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:24:31.066169 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:24:31.066179 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:24:31.066270 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:24:31.066283 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:24:31.066294 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:24:31.066305 | orchestrator |
2025-09-10 00:24:31.066316 | orchestrator | PLAY RECAP *********************************************************************
2025-09-10 00:24:31.066328 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-10 00:24:31.066340 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-10 00:24:31.066351 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-10 00:24:31.066362 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-10 00:24:31.066373 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-10 00:24:31.066383 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-10 00:24:31.066396 | orchestrator |
2025-09-10 00:24:31.066410 | orchestrator |
2025-09-10 00:24:31.066422 | orchestrator | TASKS RECAP ********************************************************************
2025-09-10 00:24:31.066436 | orchestrator | Wednesday 10 September 2025 00:24:30 +0000 (0:00:00.265) 0:00:14.586 ***
2025-09-10 00:24:31.066449 | orchestrator | ===============================================================================
2025-09-10 00:24:31.066461 | orchestrator | Gathering Facts --------------------------------------------------------- 4.56s
2025-09-10 00:24:31.066474 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.37s
2025-09-10 00:24:31.066486 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.33s
2025-09-10 00:24:31.066499 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.25s
2025-09-10 00:24:31.066512 | orchestrator | Do not require tty for all users ---------------------------------------- 0.97s
2025-09-10 00:24:31.066525 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.87s
2025-09-10 00:24:31.066538 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.69s
2025-09-10 00:24:31.066550 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.63s
2025-09-10 00:24:31.066562 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.62s
2025-09-10 00:24:31.066574 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.61s
2025-09-10 00:24:31.066588 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.27s
2025-09-10 00:24:31.066601 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.21s
2025-09-10 00:24:31.066614 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.20s
2025-09-10 00:24:31.066626 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.18s
2025-09-10 00:24:31.066654 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.17s
2025-09-10 00:24:31.066666 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.17s
2025-09-10 00:24:31.066679 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.17s
2025-09-10 00:24:31.066692 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.16s
2025-09-10 00:24:31.477565 | orchestrator | + osism apply --environment custom facts
2025-09-10 00:24:33.363751 | orchestrator | 2025-09-10 00:24:33 | INFO  | Trying to run play facts in environment custom
2025-09-10 00:24:43.450410 | orchestrator | 2025-09-10 00:24:43 | INFO  | Task db517538-e87c-40fb-a705-78f45c912dee (facts) was prepared for execution.
2025-09-10 00:24:43.450527 | orchestrator | 2025-09-10 00:24:43 | INFO  | It takes a moment until task db517538-e87c-40fb-a705-78f45c912dee (facts) has been started and output is visible here.
2025-09-10 00:25:26.297358 | orchestrator |
2025-09-10 00:25:26.297471 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-09-10 00:25:26.297489 | orchestrator |
2025-09-10 00:25:26.297501 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-09-10 00:25:26.297513 | orchestrator | Wednesday 10 September 2025 00:24:47 +0000 (0:00:00.085) 0:00:00.085 ***
2025-09-10 00:25:26.297524 | orchestrator | ok: [testbed-manager]
2025-09-10 00:25:26.297537 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:25:26.297549 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:25:26.297560 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:25:26.297571 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:25:26.297582 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:25:26.297592 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:25:26.297603 | orchestrator |
2025-09-10 00:25:26.297614 | orchestrator | TASK [Copy fact file] **********************************************************
2025-09-10 00:25:26.297625 | orchestrator | Wednesday 10 September 2025 00:24:48 +0000 (0:00:01.411) 0:00:01.496 ***
2025-09-10 00:25:26.297636 | orchestrator | ok: [testbed-manager]
2025-09-10 00:25:26.297647 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:25:26.297659 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:25:26.297669 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:25:26.297680 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:25:26.297691 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:25:26.297702 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:25:26.297713 | orchestrator |
2025-09-10 00:25:26.297724 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-09-10 00:25:26.297735 | orchestrator |
2025-09-10 00:25:26.297746 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-09-10 00:25:26.297758 | orchestrator | Wednesday 10 September 2025 00:24:49 +0000 (0:00:01.139) 0:00:02.636 ***
2025-09-10 00:25:26.297768 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:25:26.297780 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:25:26.297790 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:25:26.297801 | orchestrator |
2025-09-10 00:25:26.297812 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-09-10 00:25:26.297825 | orchestrator | Wednesday 10 September 2025 00:24:49 +0000 (0:00:00.116) 0:00:02.752 ***
2025-09-10 00:25:26.297835 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:25:26.297847 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:25:26.297857 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:25:26.297868 | orchestrator |
2025-09-10 00:25:26.297879 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-09-10 00:25:26.297891 | orchestrator | Wednesday 10 September 2025 00:24:50 +0000 (0:00:00.222) 0:00:02.975 ***
2025-09-10 00:25:26.297902 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:25:26.297913 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:25:26.297924 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:25:26.297935 | orchestrator |
2025-09-10 00:25:26.297945 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-09-10 00:25:26.297957 | orchestrator | Wednesday 10 September 2025 00:24:50 +0000 (0:00:00.191) 0:00:03.166 ***
2025-09-10 00:25:26.297969 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-10 00:25:26.297982 | orchestrator |
2025-09-10 00:25:26.297993 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-09-10 00:25:26.298004 | orchestrator | Wednesday 10 September 2025 00:24:50 +0000 (0:00:00.159) 0:00:03.326 ***
2025-09-10 00:25:26.298094 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:25:26.298109 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:25:26.298121 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:25:26.298132 | orchestrator |
2025-09-10 00:25:26.298142 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-09-10 00:25:26.298154 | orchestrator | Wednesday 10 September 2025 00:24:51 +0000 (0:00:00.450) 0:00:03.777 ***
2025-09-10 00:25:26.298165 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:25:26.298176 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:25:26.298186 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:25:26.298197 | orchestrator |
2025-09-10 00:25:26.298208 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-09-10 00:25:26.298239 | orchestrator | Wednesday 10 September 2025 00:24:51 +0000 (0:00:00.103) 0:00:03.881 ***
2025-09-10 00:25:26.298250 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:25:26.298261 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:25:26.298272 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:25:26.298283 | orchestrator |
2025-09-10 00:25:26.298293 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-09-10 00:25:26.298305 | orchestrator | Wednesday 10 September 2025 00:24:52 +0000 (0:00:01.002) 0:00:04.883 ***
2025-09-10 00:25:26.298316 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:25:26.298327 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:25:26.298337 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:25:26.298348 | orchestrator |
2025-09-10 00:25:26.298359 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-09-10 00:25:26.298372 | orchestrator | Wednesday 10 September 2025 00:24:52 +0000 (0:00:00.477) 0:00:05.360 ***
2025-09-10 00:25:26.298383 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:25:26.298394 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:25:26.298405 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:25:26.298416 | orchestrator |
2025-09-10 00:25:26.298427 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-09-10 00:25:26.298438 | orchestrator | Wednesday 10 September 2025 00:24:53 +0000 (0:00:01.033) 0:00:06.394 ***
2025-09-10 00:25:26.298467 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:25:26.298478 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:25:26.298489 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:25:26.298500 | orchestrator |
2025-09-10 00:25:26.298511 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-09-10 00:25:26.298522 | orchestrator | Wednesday 10 September 2025 00:25:10 +0000 (0:00:16.760) 0:00:23.155 ***
2025-09-10 00:25:26.298533 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:25:26.298544 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:25:26.298555 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:25:26.298565 | orchestrator |
2025-09-10 00:25:26.298576 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-09-10 00:25:26.298608 | orchestrator | Wednesday 10 September 2025 00:25:10 +0000 (0:00:00.128) 0:00:23.283 ***
2025-09-10 00:25:26.298619 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:25:26.298630 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:25:26.298641 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:25:26.298652 | orchestrator |
2025-09-10 00:25:26.298663 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-09-10 00:25:26.298674 | orchestrator | Wednesday 10 September 2025 00:25:17 +0000 (0:00:06.732) 0:00:30.015 ***
2025-09-10 00:25:26.298685 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:25:26.298696 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:25:26.298706 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:25:26.298717 | orchestrator |
2025-09-10 00:25:26.298728 | orchestrator | TASK [Copy fact files] *********************************************************
2025-09-10 00:25:26.298739 | orchestrator | Wednesday 10 September 2025 00:25:17 +0000 (0:00:00.430) 0:00:30.446 ***
2025-09-10 00:25:26.298750 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-09-10 00:25:26.298762 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-09-10 00:25:26.298779 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-09-10 00:25:26.298790 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-09-10 00:25:26.298801 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-09-10 00:25:26.298812 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-09-10 00:25:26.298823 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-09-10 00:25:26.298833 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-09-10 00:25:26.298844 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-09-10 00:25:26.298855 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-09-10 00:25:26.298866 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-09-10 00:25:26.298877 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-09-10 00:25:26.298888 | orchestrator |
2025-09-10 00:25:26.298899 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-09-10 00:25:26.298909 | orchestrator | Wednesday 10 September 2025 00:25:21 +0000 (0:00:03.461) 0:00:33.907 ***
2025-09-10 00:25:26.298920 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:25:26.298931 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:25:26.298942 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:25:26.298953 | orchestrator |
2025-09-10 00:25:26.298964 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-10 00:25:26.298975 | orchestrator |
2025-09-10 00:25:26.298986 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-10 00:25:26.298997 | orchestrator | Wednesday 10 September 2025 00:25:22 +0000 (0:00:01.175) 0:00:35.083 ***
2025-09-10 00:25:26.299008 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:25:26.299019 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:25:26.299029 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:25:26.299040 | orchestrator | ok: [testbed-manager]
2025-09-10 00:25:26.299051 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:25:26.299062 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:25:26.299072 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:25:26.299083 | orchestrator |
2025-09-10 00:25:26.299094 | orchestrator | PLAY RECAP *********************************************************************
2025-09-10 00:25:26.299106 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-10 00:25:26.299117 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-10 00:25:26.299130 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-10 00:25:26.299141 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-10 00:25:26.299152 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-10 00:25:26.299164 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-10 00:25:26.299180 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-10 00:25:26.299191 | orchestrator |
2025-09-10 00:25:26.299202 | orchestrator |
2025-09-10 00:25:26.299230 | orchestrator | TASKS RECAP ********************************************************************
2025-09-10 00:25:26.299241 | orchestrator | Wednesday 10 September 2025 00:25:26 +0000 (0:00:03.957) 0:00:39.041 ***
2025-09-10 00:25:26.299252 | orchestrator | ===============================================================================
2025-09-10 00:25:26.299270 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.76s
2025-09-10 00:25:26.299281 | orchestrator | Install required packages (Debian) -------------------------------------- 6.73s
2025-09-10 00:25:26.299292 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.96s
2025-09-10 00:25:26.299303 | orchestrator | Copy fact files --------------------------------------------------------- 3.46s
2025-09-10 00:25:26.299313 | orchestrator | Create custom facts directory ------------------------------------------- 1.41s
2025-09-10 00:25:26.299325 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.18s
2025-09-10 00:25:26.299343 | orchestrator | Copy fact file ---------------------------------------------------------- 1.14s
2025-09-10 00:25:26.559609 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.03s
2025-09-10 00:25:26.559706 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.00s
2025-09-10 00:25:26.559720 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.48s
2025-09-10 00:25:26.559731 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.45s
2025-09-10 00:25:26.559741 | orchestrator | Create custom facts directory ------------------------------------------- 0.43s
2025-09-10 00:25:26.559751 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.22s
2025-09-10 00:25:26.559760 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.19s
2025-09-10 00:25:26.559770 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.16s
2025-09-10 00:25:26.559780 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.13s
2025-09-10 00:25:26.559790 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.12s
2025-09-10 00:25:26.559800 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.10s
2025-09-10 00:25:26.856957 | orchestrator | + osism apply bootstrap
2025-09-10 00:25:38.819800 | orchestrator | 2025-09-10 00:25:38 | INFO  | Task 77118469-3895-4548-8367-d9382bee2d21 (bootstrap) was prepared for execution.
2025-09-10 00:25:38.819917 | orchestrator | 2025-09-10 00:25:38 | INFO  | It takes a moment until task 77118469-3895-4548-8367-d9382bee2d21 (bootstrap) has been started and output is visible here.
2025-09-10 00:25:54.801771 | orchestrator |
2025-09-10 00:25:54.801892 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-09-10 00:25:54.801910 | orchestrator |
2025-09-10 00:25:54.801922 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-09-10 00:25:54.801934 | orchestrator | Wednesday 10 September 2025 00:25:42 +0000 (0:00:00.185) 0:00:00.185 ***
2025-09-10 00:25:54.801945 | orchestrator | ok: [testbed-manager]
2025-09-10 00:25:54.801957 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:25:54.801968 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:25:54.801979 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:25:54.801989 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:25:54.802000 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:25:54.802010 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:25:54.802085 | orchestrator |
2025-09-10 00:25:54.802098 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-10 00:25:54.802109 | orchestrator |
2025-09-10 00:25:54.802120 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-10 00:25:54.802140 | orchestrator | Wednesday 10 September 2025 00:25:43 +0000 (0:00:00.290) 0:00:00.475 ***
2025-09-10 00:25:54.802151 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:25:54.802162 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:25:54.802173 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:25:54.802184 | orchestrator | ok: [testbed-manager]
2025-09-10 00:25:54.802195 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:25:54.802206 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:25:54.802216 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:25:54.802284 | orchestrator |
2025-09-10 00:25:54.802298 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-09-10 00:25:54.802311 | orchestrator |
2025-09-10 00:25:54.802323 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-10 00:25:54.802336 | orchestrator | Wednesday 10 September 2025 00:25:46 +0000 (0:00:03.668) 0:00:04.144 ***
2025-09-10 00:25:54.802349 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-09-10 00:25:54.802362 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-09-10 00:25:54.802374 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-09-10 00:25:54.802387 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-10 00:25:54.802399 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-09-10 00:25:54.802412 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-10 00:25:54.802425 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-09-10 00:25:54.802437 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-09-10 00:25:54.802449 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-10 00:25:54.802461 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-09-10 00:25:54.802473 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-09-10 00:25:54.802486 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-09-10 00:25:54.802498 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-09-10 00:25:54.802510 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-09-10 00:25:54.802523 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-09-10 00:25:54.802536 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-09-10 00:25:54.802549 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-09-10 00:25:54.802561 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:25:54.802573 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-09-10 00:25:54.802585 | orchestrator | skipping: [testbed-manager]
2025-09-10 00:25:54.802598 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-09-10 00:25:54.802611 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-09-10 00:25:54.802623 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-09-10 00:25:54.802635 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-09-10 00:25:54.802648 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-09-10 00:25:54.802659 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-09-10 00:25:54.802670 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-09-10 00:25:54.802681 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-09-10 00:25:54.802691 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-09-10 00:25:54.802702 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-09-10 00:25:54.802712 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-09-10 00:25:54.802723 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-09-10 00:25:54.802734 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-09-10 00:25:54.802745 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-09-10 00:25:54.802755 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-09-10 00:25:54.802784 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-09-10 00:25:54.802795 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-09-10 00:25:54.802806 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-09-10 00:25:54.802817 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-09-10 00:25:54.802827 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:25:54.802838 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-09-10 00:25:54.802856 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-09-10 00:25:54.802868 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-09-10 00:25:54.802879 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-09-10 00:25:54.802890 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:25:54.802901 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-09-10 00:25:54.802931 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-10 00:25:54.802943 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-09-10 00:25:54.802953 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-09-10 00:25:54.802964 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:25:54.802975 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-10 00:25:54.802986 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-09-10 00:25:54.802996 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-10 00:25:54.803007 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:25:54.803018 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-09-10 00:25:54.803029 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:25:54.803039 | orchestrator |
2025-09-10 00:25:54.803050 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2025-09-10 00:25:54.803061 | orchestrator |
2025-09-10 00:25:54.803072 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2025-09-10 00:25:54.803083 | orchestrator | Wednesday 10 September 2025 00:25:47 +0000 (0:00:00.409) 0:00:04.553 ***
2025-09-10 00:25:54.803094 | orchestrator | ok: [testbed-manager]
2025-09-10 00:25:54.803105 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:25:54.803116 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:25:54.803126 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:25:54.803137 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:25:54.803148 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:25:54.803159 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:25:54.803169 | orchestrator |
2025-09-10 00:25:54.803180 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2025-09-10 00:25:54.803191 | orchestrator | Wednesday 10 September 2025 00:25:48 +0000 (0:00:01.306) 0:00:05.860 ***
2025-09-10 00:25:54.803202 | orchestrator | ok: [testbed-manager]
2025-09-10 00:25:54.803212 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:25:54.803223 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:25:54.803253 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:25:54.803264 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:25:54.803275 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:25:54.803286 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:25:54.803296 | orchestrator |
2025-09-10 00:25:54.803307 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2025-09-10 00:25:54.803318 | orchestrator | Wednesday 10 September 2025 00:25:49 +0000 (0:00:01.243) 0:00:07.104 ***
2025-09-10 00:25:54.803330 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-10 00:25:54.803344 | orchestrator |
2025-09-10 00:25:54.803355 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2025-09-10 00:25:54.803366 | orchestrator | Wednesday 10 September 2025 00:25:50 +0000 (0:00:00.288) 0:00:07.392 ***
2025-09-10 00:25:54.803377 | orchestrator | changed: [testbed-manager]
2025-09-10 00:25:54.803388 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:25:54.803404 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:25:54.803415 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:25:54.803426 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:25:54.803437 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:25:54.803448 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:25:54.803458 | orchestrator |
2025-09-10 00:25:54.803476 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2025-09-10 00:25:54.803487 | orchestrator | Wednesday 10 September 2025 00:25:52 +0000 (0:00:02.032) 0:00:09.424 ***
2025-09-10 00:25:54.803498 | orchestrator | skipping: [testbed-manager]
2025-09-10 00:25:54.803511 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-10 00:25:54.803523 | orchestrator |
2025-09-10 00:25:54.803534 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2025-09-10 00:25:54.803545 | orchestrator | Wednesday 10 September 2025 00:25:52 +0000 (0:00:00.268) 0:00:09.692 ***
2025-09-10 00:25:54.803556 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:25:54.803567 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:25:54.803577 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:25:54.803588 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:25:54.803599 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:25:54.803609 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:25:54.803620 | orchestrator |
2025-09-10 00:25:54.803631 | orchestrator | TASK [osism.commons.proxy :
Set system wide settings in environment file] ****** 2025-09-10 00:25:54.803642 | orchestrator | Wednesday 10 September 2025 00:25:53 +0000 (0:00:01.170) 0:00:10.863 *** 2025-09-10 00:25:54.803652 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:25:54.803663 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:25:54.803674 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:25:54.803685 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:25:54.803696 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:25:54.803707 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:25:54.803717 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:25:54.803728 | orchestrator | 2025-09-10 00:25:54.803739 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-09-10 00:25:54.803750 | orchestrator | Wednesday 10 September 2025 00:25:54 +0000 (0:00:00.556) 0:00:11.420 *** 2025-09-10 00:25:54.803761 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:25:54.803771 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:25:54.803782 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:25:54.803793 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:25:54.803804 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:25:54.803814 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:25:54.803825 | orchestrator | ok: [testbed-manager] 2025-09-10 00:25:54.803836 | orchestrator | 2025-09-10 00:25:54.803847 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-09-10 00:25:54.803859 | orchestrator | Wednesday 10 September 2025 00:25:54 +0000 (0:00:00.450) 0:00:11.871 *** 2025-09-10 00:25:54.803870 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:25:54.803881 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:25:54.803898 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:26:06.857178 | orchestrator | skipping: 
[testbed-node-2] 2025-09-10 00:26:06.857288 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:26:06.857299 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:26:06.857307 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:26:06.857315 | orchestrator | 2025-09-10 00:26:06.857324 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-09-10 00:26:06.857332 | orchestrator | Wednesday 10 September 2025 00:25:54 +0000 (0:00:00.293) 0:00:12.165 *** 2025-09-10 00:26:06.857342 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-10 00:26:06.857363 | orchestrator | 2025-09-10 00:26:06.857371 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-09-10 00:26:06.857379 | orchestrator | Wednesday 10 September 2025 00:25:55 +0000 (0:00:00.289) 0:00:12.455 *** 2025-09-10 00:26:06.857408 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-10 00:26:06.857416 | orchestrator | 2025-09-10 00:26:06.857424 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-09-10 00:26:06.857431 | orchestrator | Wednesday 10 September 2025 00:25:55 +0000 (0:00:00.367) 0:00:12.822 *** 2025-09-10 00:26:06.857438 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:26:06.857447 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:26:06.857454 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:26:06.857461 | orchestrator | ok: [testbed-manager] 2025-09-10 00:26:06.857468 | orchestrator | ok: [testbed-node-4] 2025-09-10 
00:26:06.857475 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:26:06.857482 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:26:06.857489 | orchestrator | 2025-09-10 00:26:06.857496 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-09-10 00:26:06.857503 | orchestrator | Wednesday 10 September 2025 00:25:56 +0000 (0:00:01.280) 0:00:14.103 *** 2025-09-10 00:26:06.857511 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:26:06.857518 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:26:06.857525 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:26:06.857532 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:26:06.857539 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:26:06.857547 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:26:06.857554 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:26:06.857561 | orchestrator | 2025-09-10 00:26:06.857568 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-09-10 00:26:06.857575 | orchestrator | Wednesday 10 September 2025 00:25:57 +0000 (0:00:00.221) 0:00:14.324 *** 2025-09-10 00:26:06.857583 | orchestrator | ok: [testbed-manager] 2025-09-10 00:26:06.857590 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:26:06.857597 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:26:06.857604 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:26:06.857611 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:26:06.857619 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:26:06.857626 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:26:06.857633 | orchestrator | 2025-09-10 00:26:06.857640 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-09-10 00:26:06.857647 | orchestrator | Wednesday 10 September 2025 00:25:57 +0000 (0:00:00.546) 0:00:14.871 *** 2025-09-10 00:26:06.857655 | orchestrator | skipping: 
[testbed-manager] 2025-09-10 00:26:06.857662 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:26:06.857670 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:26:06.857682 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:26:06.857695 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:26:06.857707 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:26:06.857719 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:26:06.857731 | orchestrator | 2025-09-10 00:26:06.857745 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-09-10 00:26:06.857758 | orchestrator | Wednesday 10 September 2025 00:25:57 +0000 (0:00:00.268) 0:00:15.139 *** 2025-09-10 00:26:06.857771 | orchestrator | ok: [testbed-manager] 2025-09-10 00:26:06.857783 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:26:06.857794 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:26:06.857807 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:26:06.857819 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:26:06.857830 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:26:06.857843 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:26:06.857855 | orchestrator | 2025-09-10 00:26:06.857869 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-09-10 00:26:06.857881 | orchestrator | Wednesday 10 September 2025 00:25:58 +0000 (0:00:00.589) 0:00:15.729 *** 2025-09-10 00:26:06.857894 | orchestrator | ok: [testbed-manager] 2025-09-10 00:26:06.857914 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:26:06.857927 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:26:06.857939 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:26:06.857951 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:26:06.857963 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:26:06.857977 | orchestrator | changed: 
[testbed-node-5] 2025-09-10 00:26:06.857991 | orchestrator | 2025-09-10 00:26:06.858004 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-09-10 00:26:06.858063 | orchestrator | Wednesday 10 September 2025 00:25:59 +0000 (0:00:01.146) 0:00:16.876 *** 2025-09-10 00:26:06.858080 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:26:06.858092 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:26:06.858103 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:26:06.858115 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:26:06.858127 | orchestrator | ok: [testbed-manager] 2025-09-10 00:26:06.858139 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:26:06.858152 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:26:06.858164 | orchestrator | 2025-09-10 00:26:06.858176 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-09-10 00:26:06.858188 | orchestrator | Wednesday 10 September 2025 00:26:00 +0000 (0:00:01.200) 0:00:18.076 *** 2025-09-10 00:26:06.858224 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-10 00:26:06.858238 | orchestrator | 2025-09-10 00:26:06.858289 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-09-10 00:26:06.858302 | orchestrator | Wednesday 10 September 2025 00:26:01 +0000 (0:00:00.450) 0:00:18.527 *** 2025-09-10 00:26:06.858313 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:26:06.858325 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:26:06.858337 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:26:06.858349 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:26:06.858361 | orchestrator | changed: [testbed-node-0] 2025-09-10 
00:26:06.858373 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:26:06.858384 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:26:06.858396 | orchestrator | 2025-09-10 00:26:06.858406 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-10 00:26:06.858419 | orchestrator | Wednesday 10 September 2025 00:26:02 +0000 (0:00:01.274) 0:00:19.801 *** 2025-09-10 00:26:06.858431 | orchestrator | ok: [testbed-manager] 2025-09-10 00:26:06.858443 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:26:06.858455 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:26:06.858466 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:26:06.858478 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:26:06.858490 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:26:06.858502 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:26:06.858514 | orchestrator | 2025-09-10 00:26:06.858526 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-09-10 00:26:06.858538 | orchestrator | Wednesday 10 September 2025 00:26:02 +0000 (0:00:00.259) 0:00:20.061 *** 2025-09-10 00:26:06.858550 | orchestrator | ok: [testbed-manager] 2025-09-10 00:26:06.858562 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:26:06.858573 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:26:06.858585 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:26:06.858596 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:26:06.858608 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:26:06.858621 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:26:06.858633 | orchestrator | 2025-09-10 00:26:06.858645 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-10 00:26:06.858658 | orchestrator | Wednesday 10 September 2025 00:26:03 +0000 (0:00:00.249) 0:00:20.310 *** 2025-09-10 00:26:06.858719 | orchestrator | ok: [testbed-manager] 2025-09-10 00:26:06.858734 | 
orchestrator | ok: [testbed-node-0] 2025-09-10 00:26:06.858754 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:26:06.858766 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:26:06.858778 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:26:06.858790 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:26:06.858801 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:26:06.858812 | orchestrator | 2025-09-10 00:26:06.858823 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-10 00:26:06.858833 | orchestrator | Wednesday 10 September 2025 00:26:03 +0000 (0:00:00.231) 0:00:20.542 *** 2025-09-10 00:26:06.858850 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-10 00:26:06.858864 | orchestrator | 2025-09-10 00:26:06.858876 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-10 00:26:06.858887 | orchestrator | Wednesday 10 September 2025 00:26:03 +0000 (0:00:00.327) 0:00:20.869 *** 2025-09-10 00:26:06.858899 | orchestrator | ok: [testbed-manager] 2025-09-10 00:26:06.858910 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:26:06.858920 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:26:06.858931 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:26:06.858941 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:26:06.858952 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:26:06.858964 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:26:06.858975 | orchestrator | 2025-09-10 00:26:06.858987 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-10 00:26:06.858999 | orchestrator | Wednesday 10 September 2025 00:26:04 +0000 (0:00:00.529) 0:00:21.398 *** 2025-09-10 00:26:06.859009 | orchestrator | 
skipping: [testbed-manager] 2025-09-10 00:26:06.859021 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:26:06.859032 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:26:06.859044 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:26:06.859055 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:26:06.859065 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:26:06.859077 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:26:06.859088 | orchestrator | 2025-09-10 00:26:06.859099 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-10 00:26:06.859110 | orchestrator | Wednesday 10 September 2025 00:26:04 +0000 (0:00:00.220) 0:00:21.618 *** 2025-09-10 00:26:06.859121 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:26:06.859132 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:26:06.859143 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:26:06.859173 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:26:06.859187 | orchestrator | ok: [testbed-manager] 2025-09-10 00:26:06.859198 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:26:06.859210 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:26:06.859221 | orchestrator | 2025-09-10 00:26:06.859234 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-10 00:26:06.859268 | orchestrator | Wednesday 10 September 2025 00:26:05 +0000 (0:00:00.990) 0:00:22.609 *** 2025-09-10 00:26:06.859282 | orchestrator | ok: [testbed-manager] 2025-09-10 00:26:06.859295 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:26:06.859307 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:26:06.859319 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:26:06.859333 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:26:06.859358 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:26:06.859369 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:26:06.859381 | orchestrator | 
2025-09-10 00:26:06.859394 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-10 00:26:06.859406 | orchestrator | Wednesday 10 September 2025 00:26:05 +0000 (0:00:00.552) 0:00:23.161 *** 2025-09-10 00:26:06.859418 | orchestrator | ok: [testbed-manager] 2025-09-10 00:26:06.859430 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:26:06.859442 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:26:06.859455 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:26:06.859493 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:26:49.106350 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:26:49.106465 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:26:49.106479 | orchestrator | 2025-09-10 00:26:49.106491 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-09-10 00:26:49.106503 | orchestrator | Wednesday 10 September 2025 00:26:06 +0000 (0:00:00.952) 0:00:24.113 *** 2025-09-10 00:26:49.106532 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:26:49.106544 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:26:49.106554 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:26:49.106564 | orchestrator | changed: [testbed-manager] 2025-09-10 00:26:49.106574 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:26:49.106583 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:26:49.106593 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:26:49.106603 | orchestrator | 2025-09-10 00:26:49.106613 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-09-10 00:26:49.106623 | orchestrator | Wednesday 10 September 2025 00:26:24 +0000 (0:00:17.196) 0:00:41.309 *** 2025-09-10 00:26:49.106632 | orchestrator | ok: [testbed-manager] 2025-09-10 00:26:49.106642 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:26:49.106652 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:26:49.106661 | orchestrator 
| ok: [testbed-node-2] 2025-09-10 00:26:49.106671 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:26:49.106681 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:26:49.106690 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:26:49.106700 | orchestrator | 2025-09-10 00:26:49.106710 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-09-10 00:26:49.106720 | orchestrator | Wednesday 10 September 2025 00:26:24 +0000 (0:00:00.249) 0:00:41.559 *** 2025-09-10 00:26:49.106729 | orchestrator | ok: [testbed-manager] 2025-09-10 00:26:49.106739 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:26:49.106749 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:26:49.106758 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:26:49.106768 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:26:49.106777 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:26:49.106787 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:26:49.106797 | orchestrator | 2025-09-10 00:26:49.106807 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-09-10 00:26:49.106831 | orchestrator | Wednesday 10 September 2025 00:26:24 +0000 (0:00:00.233) 0:00:41.792 *** 2025-09-10 00:26:49.106841 | orchestrator | ok: [testbed-manager] 2025-09-10 00:26:49.106851 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:26:49.106861 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:26:49.106871 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:26:49.106881 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:26:49.106890 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:26:49.106900 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:26:49.106910 | orchestrator | 2025-09-10 00:26:49.106919 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-09-10 00:26:49.106929 | orchestrator | Wednesday 10 September 2025 00:26:24 +0000 (0:00:00.255) 0:00:42.048 *** 2025-09-10 
00:26:49.106970 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-10 00:26:49.106983 | orchestrator | 2025-09-10 00:26:49.106993 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-09-10 00:26:49.107003 | orchestrator | Wednesday 10 September 2025 00:26:25 +0000 (0:00:00.302) 0:00:42.351 *** 2025-09-10 00:26:49.107013 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:26:49.107023 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:26:49.107032 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:26:49.107042 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:26:49.107052 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:26:49.107061 | orchestrator | ok: [testbed-manager] 2025-09-10 00:26:49.107071 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:26:49.107098 | orchestrator | 2025-09-10 00:26:49.107108 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-09-10 00:26:49.107118 | orchestrator | Wednesday 10 September 2025 00:26:26 +0000 (0:00:01.736) 0:00:44.087 *** 2025-09-10 00:26:49.107128 | orchestrator | changed: [testbed-manager] 2025-09-10 00:26:49.107137 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:26:49.107147 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:26:49.107156 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:26:49.107166 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:26:49.107175 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:26:49.107185 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:26:49.107195 | orchestrator | 2025-09-10 00:26:49.107204 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-09-10 00:26:49.107221 | 
orchestrator | Wednesday 10 September 2025 00:26:27 +0000 (0:00:01.039) 0:00:45.127 *** 2025-09-10 00:26:49.107236 | orchestrator | ok: [testbed-manager] 2025-09-10 00:26:49.107247 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:26:49.107256 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:26:49.107266 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:26:49.107275 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:26:49.107285 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:26:49.107294 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:26:49.107321 | orchestrator | 2025-09-10 00:26:49.107370 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-09-10 00:26:49.107380 | orchestrator | Wednesday 10 September 2025 00:26:28 +0000 (0:00:00.794) 0:00:45.921 *** 2025-09-10 00:26:49.107391 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-10 00:26:49.107403 | orchestrator | 2025-09-10 00:26:49.107413 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-09-10 00:26:49.107423 | orchestrator | Wednesday 10 September 2025 00:26:28 +0000 (0:00:00.325) 0:00:46.247 *** 2025-09-10 00:26:49.107433 | orchestrator | changed: [testbed-manager] 2025-09-10 00:26:49.107443 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:26:49.107453 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:26:49.107462 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:26:49.107472 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:26:49.107481 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:26:49.107491 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:26:49.107500 | orchestrator | 2025-09-10 00:26:49.107529 | orchestrator | TASK [osism.services.rsyslog : 
Include additional log server tasks] ************ 2025-09-10 00:26:49.107539 | orchestrator | Wednesday 10 September 2025 00:26:30 +0000 (0:00:01.029) 0:00:47.276 *** 2025-09-10 00:26:49.107549 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:26:49.107559 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:26:49.107569 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:26:49.107594 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:26:49.107604 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:26:49.107614 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:26:49.107623 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:26:49.107633 | orchestrator | 2025-09-10 00:26:49.107643 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-09-10 00:26:49.107653 | orchestrator | Wednesday 10 September 2025 00:26:30 +0000 (0:00:00.312) 0:00:47.588 *** 2025-09-10 00:26:49.107662 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:26:49.107672 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:26:49.107682 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:26:49.107691 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:26:49.107700 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:26:49.107710 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:26:49.107720 | orchestrator | changed: [testbed-manager] 2025-09-10 00:26:49.107738 | orchestrator | 2025-09-10 00:26:49.107748 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-09-10 00:26:49.107757 | orchestrator | Wednesday 10 September 2025 00:26:43 +0000 (0:00:13.353) 0:01:00.941 *** 2025-09-10 00:26:49.107767 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:26:49.107777 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:26:49.107787 | orchestrator | ok: [testbed-manager] 2025-09-10 00:26:49.107796 | orchestrator | ok: [testbed-node-2] 2025-09-10 
00:26:49.107806 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:26:49.107816 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:26:49.107825 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:26:49.107835 | orchestrator |
2025-09-10 00:26:49.107860 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2025-09-10 00:26:49.107870 | orchestrator | Wednesday 10 September 2025 00:26:44 +0000 (0:00:01.006) 0:01:01.947 ***
2025-09-10 00:26:49.107880 | orchestrator | ok: [testbed-manager]
2025-09-10 00:26:49.107889 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:26:49.107899 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:26:49.107908 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:26:49.107918 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:26:49.107927 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:26:49.107937 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:26:49.107947 | orchestrator |
2025-09-10 00:26:49.107956 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2025-09-10 00:26:49.107966 | orchestrator | Wednesday 10 September 2025 00:26:45 +0000 (0:00:00.961) 0:01:02.909 ***
2025-09-10 00:26:49.107976 | orchestrator | ok: [testbed-manager]
2025-09-10 00:26:49.107985 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:26:49.107995 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:26:49.108004 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:26:49.108014 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:26:49.108024 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:26:49.108033 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:26:49.108043 | orchestrator |
2025-09-10 00:26:49.108053 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2025-09-10 00:26:49.108063 | orchestrator | Wednesday 10 September 2025 00:26:45 +0000 (0:00:00.254) 0:01:03.163 ***
2025-09-10 00:26:49.108072 | orchestrator | ok: [testbed-manager]
2025-09-10 00:26:49.108082 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:26:49.108092 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:26:49.108101 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:26:49.108111 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:26:49.108120 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:26:49.108130 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:26:49.108153 | orchestrator |
2025-09-10 00:26:49.108163 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2025-09-10 00:26:49.108173 | orchestrator | Wednesday 10 September 2025 00:26:46 +0000 (0:00:00.327) 0:01:03.422 ***
2025-09-10 00:26:49.108183 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-10 00:26:49.108193 | orchestrator |
2025-09-10 00:26:49.108203 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2025-09-10 00:26:49.108212 | orchestrator | Wednesday 10 September 2025 00:26:46 +0000 (0:00:00.327) 0:01:03.750 ***
2025-09-10 00:26:49.108222 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:26:49.108232 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:26:49.108241 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:26:49.108251 | orchestrator | ok: [testbed-manager]
2025-09-10 00:26:49.108261 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:26:49.108270 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:26:49.108280 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:26:49.108289 | orchestrator |
2025-09-10 00:26:49.108299 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2025-09-10 00:26:49.108309 | orchestrator | Wednesday 10 September 2025 00:26:48 +0000 (0:00:01.687) 0:01:05.437 ***
2025-09-10 00:26:49.108339 | orchestrator | changed: [testbed-manager]
2025-09-10 00:26:49.108350 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:26:49.108359 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:26:49.108369 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:26:49.108378 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:26:49.108388 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:26:49.108398 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:26:49.108422 | orchestrator |
2025-09-10 00:26:49.108432 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2025-09-10 00:26:49.108442 | orchestrator | Wednesday 10 September 2025 00:26:48 +0000 (0:00:00.654) 0:01:06.092 ***
2025-09-10 00:26:49.108451 | orchestrator | ok: [testbed-manager]
2025-09-10 00:26:49.108461 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:26:49.108471 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:26:49.108480 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:26:49.108490 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:26:49.108499 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:26:49.108509 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:26:49.108519 | orchestrator |
2025-09-10 00:26:49.108535 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2025-09-10 00:29:10.531260 | orchestrator | Wednesday 10 September 2025 00:26:49 +0000 (0:00:00.265) 0:01:06.358 ***
2025-09-10 00:29:10.531381 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:29:10.531398 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:29:10.531411 | orchestrator | ok: [testbed-manager]
2025-09-10 00:29:10.531422 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:29:10.531432 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:29:10.531443 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:29:10.531454 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:29:10.531533 | orchestrator |
2025-09-10 00:29:10.531546 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2025-09-10 00:29:10.531558 | orchestrator | Wednesday 10 September 2025 00:26:50 +0000 (0:00:01.291) 0:01:07.649 ***
2025-09-10 00:29:10.531569 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:29:10.531581 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:29:10.531593 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:29:10.531603 | orchestrator | changed: [testbed-manager]
2025-09-10 00:29:10.531614 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:29:10.531625 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:29:10.531636 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:29:10.531647 | orchestrator |
2025-09-10 00:29:10.531658 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2025-09-10 00:29:10.531670 | orchestrator | Wednesday 10 September 2025 00:26:52 +0000 (0:00:01.756) 0:01:09.406 ***
2025-09-10 00:29:10.531684 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:29:10.531702 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:29:10.531721 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:29:10.531738 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:29:10.531778 | orchestrator | ok: [testbed-manager]
2025-09-10 00:29:10.531797 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:29:10.531817 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:29:10.531836 | orchestrator |
2025-09-10 00:29:10.531853 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2025-09-10 00:29:10.531867 | orchestrator | Wednesday 10 September 2025 00:26:54 +0000 (0:00:02.516) 0:01:11.922 ***
2025-09-10 00:29:10.531880 | orchestrator | ok: [testbed-manager]
2025-09-10 00:29:10.531892 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:29:10.531902 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:29:10.531913 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:29:10.531924 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:29:10.531934 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:29:10.531945 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:29:10.531956 | orchestrator |
2025-09-10 00:29:10.531967 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2025-09-10 00:29:10.532004 | orchestrator | Wednesday 10 September 2025 00:27:32 +0000 (0:00:37.381) 0:01:49.304 ***
2025-09-10 00:29:10.532018 | orchestrator | changed: [testbed-manager]
2025-09-10 00:29:10.532036 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:29:10.532059 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:29:10.532083 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:29:10.532100 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:29:10.532116 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:29:10.532134 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:29:10.532149 | orchestrator |
2025-09-10 00:29:10.532174 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2025-09-10 00:29:10.532191 | orchestrator | Wednesday 10 September 2025 00:28:50 +0000 (0:01:18.656) 0:03:07.960 ***
2025-09-10 00:29:10.532210 | orchestrator | ok: [testbed-manager]
2025-09-10 00:29:10.532227 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:29:10.532246 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:29:10.532265 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:29:10.532283 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:29:10.532297 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:29:10.532308 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:29:10.532319 | orchestrator |
2025-09-10 00:29:10.532330 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2025-09-10 00:29:10.532342 | orchestrator | Wednesday 10 September 2025 00:28:52 +0000 (0:00:01.648) 0:03:09.609 ***
2025-09-10 00:29:10.532353 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:29:10.532363 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:29:10.532374 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:29:10.532385 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:29:10.532395 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:29:10.532406 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:29:10.532416 | orchestrator | changed: [testbed-manager]
2025-09-10 00:29:10.532427 | orchestrator |
2025-09-10 00:29:10.532438 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2025-09-10 00:29:10.532449 | orchestrator | Wednesday 10 September 2025 00:29:04 +0000 (0:00:12.416) 0:03:22.025 ***
2025-09-10 00:29:10.532495 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2025-09-10 00:29:10.532521 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2025-09-10 00:29:10.532561 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2025-09-10 00:29:10.532575 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2025-09-10 00:29:10.532599 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2025-09-10 00:29:10.532610 | orchestrator |
2025-09-10 00:29:10.532621 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2025-09-10 00:29:10.532632 | orchestrator | Wednesday 10 September 2025 00:29:05 +0000 (0:00:00.395) 0:03:22.420 ***
2025-09-10 00:29:10.532643 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-10 00:29:10.532654 | orchestrator | skipping: [testbed-manager]
2025-09-10 00:29:10.532665 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-10 00:29:10.532676 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:29:10.532687 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-10 00:29:10.532698 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:29:10.532708 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-10 00:29:10.532719 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:29:10.532730 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-10 00:29:10.532741 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-10 00:29:10.532751 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-10 00:29:10.532762 | orchestrator |
2025-09-10 00:29:10.532773 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2025-09-10 00:29:10.532789 | orchestrator | Wednesday 10 September 2025 00:29:05 +0000 (0:00:00.645) 0:03:23.065 ***
2025-09-10 00:29:10.532803 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-10 00:29:10.532827 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-10 00:29:10.532853 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-10 00:29:10.532870 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-10 00:29:10.532888 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-10 00:29:10.532904 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-10 00:29:10.532922 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-10 00:29:10.532940 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-10 00:29:10.532959 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-10 00:29:10.532974 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-10 00:29:10.532986 | orchestrator | skipping: [testbed-manager]
2025-09-10 00:29:10.532997 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-10 00:29:10.533007 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-10 00:29:10.533018 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-10 00:29:10.533029 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-10 00:29:10.533039 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-10 00:29:10.533050 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-10 00:29:10.533070 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-10 00:29:10.533081 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-10 00:29:10.533092 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-10 00:29:10.533103 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-10 00:29:10.533123 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-10 00:29:13.749277 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-10 00:29:13.749388 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-10 00:29:13.749403 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:29:13.749416 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-10 00:29:13.749429 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-10 00:29:13.749440 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-10 00:29:13.749451 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-10 00:29:13.749507 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-10 00:29:13.749520 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-10 00:29:13.749531 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-10 00:29:13.749541 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-10 00:29:13.749552 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-10 00:29:13.749563 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-10 00:29:13.749574 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-10 00:29:13.749585 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:29:13.749595 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-10 00:29:13.749606 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-10 00:29:13.749617 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-10 00:29:13.749628 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-10 00:29:13.749639 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-10 00:29:13.749736 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-10 00:29:13.749752 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:29:13.749763 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-10 00:29:13.749774 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-10 00:29:13.749785 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-10 00:29:13.749795 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-10 00:29:13.749808 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-10 00:29:13.749821 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-10 00:29:13.749834 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-10 00:29:13.749874 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-10 00:29:13.749887 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-10 00:29:13.749899 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-10 00:29:13.749911 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-10 00:29:13.749923 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-10 00:29:13.749936 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-10 00:29:13.749948 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-10 00:29:13.749960 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-10 00:29:13.749972 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-10 00:29:13.749984 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-10 00:29:13.749997 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-10 00:29:13.750010 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-10 00:29:13.750082 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-10 00:29:13.750095 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-10 00:29:13.750129 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-10 00:29:13.750143 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-10 00:29:13.750156 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-10 00:29:13.750167 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-10 00:29:13.750178 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-10 00:29:13.750189 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-10 00:29:13.750200 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-10 00:29:13.750211 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-10 00:29:13.750221 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-10 00:29:13.750232 | orchestrator |
2025-09-10 00:29:13.750244 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2025-09-10 00:29:13.750255 | orchestrator | Wednesday 10 September 2025 00:29:10 +0000 (0:00:04.715) 0:03:27.781 ***
2025-09-10 00:29:13.750265 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-10 00:29:13.750276 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-10 00:29:13.750287 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-10 00:29:13.750297 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-10 00:29:13.750308 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-10 00:29:13.750319 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-10 00:29:13.750329 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-10 00:29:13.750340 | orchestrator |
2025-09-10 00:29:13.750351 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2025-09-10 00:29:13.750371 | orchestrator | Wednesday 10 September 2025 00:29:11 +0000 (0:00:00.615) 0:03:28.396 ***
2025-09-10 00:29:13.750382 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-10 00:29:13.750393 | orchestrator | skipping: [testbed-manager]
2025-09-10 00:29:13.750413 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-10 00:29:13.750425 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:29:13.750436 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-10 00:29:13.750447 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:29:13.750458 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-10 00:29:13.750488 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:29:13.750499 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-10 00:29:13.750510 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-10 00:29:13.750521 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-10 00:29:13.750532 | orchestrator |
2025-09-10 00:29:13.750543 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2025-09-10 00:29:13.750554 | orchestrator | Wednesday 10 September 2025 00:29:11 +0000 (0:00:00.619) 0:03:29.015 ***
2025-09-10 00:29:13.750564 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-10 00:29:13.750575 | orchestrator | skipping: [testbed-manager]
2025-09-10 00:29:13.750586 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-10 00:29:13.750597 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-10 00:29:13.750607 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:29:13.750618 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:29:13.750629 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-10 00:29:13.750640 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:29:13.750650 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-10 00:29:13.750661 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-10 00:29:13.750672 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-10 00:29:13.750682 | orchestrator |
2025-09-10 00:29:13.750693 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2025-09-10 00:29:13.750704 | orchestrator | Wednesday 10 September 2025 00:29:13 +0000 (0:00:01.684) 0:03:30.700 ***
2025-09-10 00:29:13.750714 | orchestrator | skipping: [testbed-manager]
2025-09-10 00:29:13.750725 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:29:13.750736 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:29:13.750747 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:29:13.750757 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:29:13.750775 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:29:26.589852 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:29:26.589996 | orchestrator |
2025-09-10 00:29:26.590055 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2025-09-10 00:29:26.590848 | orchestrator | Wednesday 10 September 2025 00:29:13 +0000 (0:00:00.309) 0:03:31.010 ***
2025-09-10 00:29:26.590867 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:29:26.590883 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:29:26.590894 | orchestrator | ok: [testbed-manager]
2025-09-10 00:29:26.590906 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:29:26.590941 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:29:26.590952 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:29:26.590963 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:29:26.590974 | orchestrator |
2025-09-10 00:29:26.590985 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2025-09-10 00:29:26.590996 | orchestrator | Wednesday 10 September 2025 00:29:19 +0000 (0:00:05.942) 0:03:36.953 ***
2025-09-10 00:29:26.591007 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2025-09-10 00:29:26.591018 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2025-09-10 00:29:26.591029 | orchestrator | skipping: [testbed-manager]
2025-09-10 00:29:26.591040 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2025-09-10 00:29:26.591050 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:29:26.591061 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2025-09-10 00:29:26.591071 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:29:26.591082 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:29:26.591093 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2025-09-10 00:29:26.591103 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2025-09-10 00:29:26.591114 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:29:26.591129 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:29:26.591140 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2025-09-10 00:29:26.591151 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:29:26.591162 | orchestrator |
2025-09-10 00:29:26.591173 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2025-09-10 00:29:26.591184 | orchestrator | Wednesday 10 September 2025 00:29:20 +0000 (0:00:00.321) 0:03:37.275 ***
2025-09-10 00:29:26.591195 | orchestrator | ok: [testbed-node-1] => (item=cron)
2025-09-10 00:29:26.591205 | orchestrator | ok: [testbed-node-0] => (item=cron)
2025-09-10 00:29:26.591216 | orchestrator | ok: [testbed-node-2] => (item=cron)
2025-09-10 00:29:26.591227 | orchestrator | ok: [testbed-node-3] => (item=cron)
2025-09-10 00:29:26.591238 | orchestrator | ok: [testbed-node-4] => (item=cron)
2025-09-10 00:29:26.591248 | orchestrator | ok: [testbed-node-5] => (item=cron)
2025-09-10 00:29:26.591259 | orchestrator | ok: [testbed-manager] => (item=cron)
2025-09-10 00:29:26.591270 | orchestrator |
2025-09-10 00:29:26.591281 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2025-09-10 00:29:26.591291 | orchestrator | Wednesday 10 September 2025 00:29:21 +0000 (0:00:01.883) 0:03:39.158 ***
2025-09-10 00:29:26.591319 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-10 00:29:26.591333 | orchestrator |
2025-09-10 00:29:26.591344 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2025-09-10 00:29:26.591355 | orchestrator | Wednesday 10 September 2025 00:29:22 +0000 (0:00:00.514) 0:03:39.673 ***
2025-09-10 00:29:26.591366 | orchestrator | ok: [testbed-manager]
2025-09-10 00:29:26.591377 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:29:26.591387 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:29:26.591398 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:29:26.591409 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:29:26.591419 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:29:26.591430 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:29:26.591441 | orchestrator |
2025-09-10 00:29:26.591451 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2025-09-10 00:29:26.591498 | orchestrator | Wednesday 10 September 2025 00:29:23 +0000 (0:00:01.187) 0:03:40.860 ***
2025-09-10 00:29:26.591511 | orchestrator | ok: [testbed-manager]
2025-09-10 00:29:26.591522 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:29:26.591533 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:29:26.591543 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:29:26.591554 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:29:26.591565 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:29:26.591584 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:29:26.591595 | orchestrator |
2025-09-10 00:29:26.591606 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2025-09-10 00:29:26.591616 | orchestrator | Wednesday 10 September 2025 00:29:24 +0000 (0:00:00.592) 0:03:41.453 ***
2025-09-10 00:29:26.591627 | orchestrator | changed: [testbed-manager]
2025-09-10 00:29:26.591638 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:29:26.591649 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:29:26.591659 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:29:26.591670 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:29:26.591681 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:29:26.591691 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:29:26.591702 | orchestrator |
2025-09-10 00:29:26.591713 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2025-09-10 00:29:26.591723 | orchestrator | Wednesday 10 September 2025 00:29:24 +0000 (0:00:00.609) 0:03:42.062 ***
2025-09-10 00:29:26.591734 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:29:26.591745 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:29:26.591756 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:29:26.591766 | orchestrator | ok: [testbed-manager]
2025-09-10 00:29:26.591777 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:29:26.591788 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:29:26.591798 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:29:26.591809 | orchestrator |
2025-09-10 00:29:26.591820 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2025-09-10 00:29:26.591831 | orchestrator | Wednesday 10 September 2025 00:29:25 +0000 (0:00:00.616) 0:03:42.679 ***
2025-09-10 00:29:26.591867 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1757462662.0228744, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-10 00:29:26.591884 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1757462680.6999528, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-10 00:29:26.591897 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1757462685.3377442, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-10 00:29:26.591913 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1757462678.767943, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-10 00:29:26.591925 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1757462685.1806521, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-10 00:29:26.591944 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1757462690.2125318, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-10 00:29:26.591956 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1757462685.2886233, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-10 00:29:26.591986 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-10 00:29:42.859911 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-10 00:29:42.860011 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537,
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-10 00:29:42.860029 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-10 00:29:42.860042 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-10 00:29:42.860081 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-10 
00:29:42.860090 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-10 00:29:42.860098 | orchestrator | 2025-09-10 00:29:42.860107 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-09-10 00:29:42.860116 | orchestrator | Wednesday 10 September 2025 00:29:26 +0000 (0:00:01.162) 0:03:43.841 *** 2025-09-10 00:29:42.860124 | orchestrator | changed: [testbed-manager] 2025-09-10 00:29:42.860132 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:29:42.860140 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:29:42.860147 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:29:42.860154 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:29:42.860161 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:29:42.860168 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:29:42.860175 | orchestrator | 2025-09-10 00:29:42.860183 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-09-10 00:29:42.860190 | orchestrator | Wednesday 10 September 2025 00:29:27 +0000 (0:00:01.166) 0:03:45.007 *** 2025-09-10 00:29:42.860198 | orchestrator | changed: [testbed-manager] 2025-09-10 00:29:42.860205 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:29:42.860212 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:29:42.860219 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:29:42.860241 | orchestrator | changed: [testbed-node-3] 
2025-09-10 00:29:42.860249 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:29:42.860257 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:29:42.860264 | orchestrator | 2025-09-10 00:29:42.860271 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-09-10 00:29:42.860278 | orchestrator | Wednesday 10 September 2025 00:29:28 +0000 (0:00:01.162) 0:03:46.170 *** 2025-09-10 00:29:42.860286 | orchestrator | changed: [testbed-manager] 2025-09-10 00:29:42.860308 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:29:42.860315 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:29:42.860323 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:29:42.860335 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:29:42.860347 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:29:42.860358 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:29:42.860369 | orchestrator | 2025-09-10 00:29:42.860381 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-09-10 00:29:42.860393 | orchestrator | Wednesday 10 September 2025 00:29:30 +0000 (0:00:01.121) 0:03:47.291 *** 2025-09-10 00:29:42.860413 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:29:42.860426 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:29:42.860438 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:29:42.860450 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:29:42.860462 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:29:42.860525 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:29:42.860536 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:29:42.860548 | orchestrator | 2025-09-10 00:29:42.860560 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-09-10 00:29:42.860573 | orchestrator | Wednesday 10 September 2025 00:29:30 +0000 (0:00:00.285) 0:03:47.577 *** 2025-09-10 
00:29:42.860584 | orchestrator | ok: [testbed-manager] 2025-09-10 00:29:42.860597 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:29:42.860609 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:29:42.860620 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:29:42.860632 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:29:42.860644 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:29:42.860655 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:29:42.860668 | orchestrator | 2025-09-10 00:29:42.860679 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-09-10 00:29:42.860691 | orchestrator | Wednesday 10 September 2025 00:29:31 +0000 (0:00:00.747) 0:03:48.324 *** 2025-09-10 00:29:42.860712 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-10 00:29:42.860727 | orchestrator | 2025-09-10 00:29:42.860739 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-09-10 00:29:42.860751 | orchestrator | Wednesday 10 September 2025 00:29:31 +0000 (0:00:00.399) 0:03:48.724 *** 2025-09-10 00:29:42.860763 | orchestrator | ok: [testbed-manager] 2025-09-10 00:29:42.860775 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:29:42.860787 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:29:42.860799 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:29:42.860811 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:29:42.860822 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:29:42.860834 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:29:42.860845 | orchestrator | 2025-09-10 00:29:42.860857 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-09-10 00:29:42.860869 | orchestrator | 
Wednesday 10 September 2025 00:29:39 +0000 (0:00:08.087) 0:03:56.811 *** 2025-09-10 00:29:42.860880 | orchestrator | ok: [testbed-manager] 2025-09-10 00:29:42.860892 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:29:42.860903 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:29:42.860915 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:29:42.860928 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:29:42.860939 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:29:42.860951 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:29:42.860963 | orchestrator | 2025-09-10 00:29:42.860975 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-09-10 00:29:42.860986 | orchestrator | Wednesday 10 September 2025 00:29:40 +0000 (0:00:01.160) 0:03:57.972 *** 2025-09-10 00:29:42.860997 | orchestrator | ok: [testbed-manager] 2025-09-10 00:29:42.861008 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:29:42.861020 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:29:42.861032 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:29:42.861044 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:29:42.861055 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:29:42.861067 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:29:42.861079 | orchestrator | 2025-09-10 00:29:42.861091 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-09-10 00:29:42.861103 | orchestrator | Wednesday 10 September 2025 00:29:41 +0000 (0:00:01.072) 0:03:59.045 *** 2025-09-10 00:29:42.861114 | orchestrator | ok: [testbed-manager] 2025-09-10 00:29:42.861135 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:29:42.861147 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:29:42.861159 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:29:42.861171 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:29:42.861183 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:29:42.861196 | orchestrator | ok: 
[testbed-node-5] 2025-09-10 00:29:42.861207 | orchestrator | 2025-09-10 00:29:42.861220 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-09-10 00:29:42.861233 | orchestrator | Wednesday 10 September 2025 00:29:42 +0000 (0:00:00.307) 0:03:59.352 *** 2025-09-10 00:29:42.861245 | orchestrator | ok: [testbed-manager] 2025-09-10 00:29:42.861258 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:29:42.861270 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:29:42.861281 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:29:42.861293 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:29:42.861304 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:29:42.861316 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:29:42.861328 | orchestrator | 2025-09-10 00:29:42.861340 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-09-10 00:29:42.861352 | orchestrator | Wednesday 10 September 2025 00:29:42 +0000 (0:00:00.443) 0:03:59.795 *** 2025-09-10 00:29:42.861364 | orchestrator | ok: [testbed-manager] 2025-09-10 00:29:42.861376 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:29:42.861388 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:29:42.861401 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:29:42.861413 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:29:42.861435 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:30:52.722651 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:30:52.722769 | orchestrator | 2025-09-10 00:30:52.722786 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-09-10 00:30:52.722799 | orchestrator | Wednesday 10 September 2025 00:29:42 +0000 (0:00:00.322) 0:04:00.118 *** 2025-09-10 00:30:52.722811 | orchestrator | ok: [testbed-manager] 2025-09-10 00:30:52.722822 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:30:52.722832 | orchestrator | ok: 
[testbed-node-3] 2025-09-10 00:30:52.722843 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:30:52.722854 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:30:52.722864 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:30:52.722874 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:30:52.722885 | orchestrator | 2025-09-10 00:30:52.722896 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-09-10 00:30:52.722908 | orchestrator | Wednesday 10 September 2025 00:29:48 +0000 (0:00:05.542) 0:04:05.660 *** 2025-09-10 00:30:52.722920 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-10 00:30:52.722935 | orchestrator | 2025-09-10 00:30:52.722946 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-09-10 00:30:52.722957 | orchestrator | Wednesday 10 September 2025 00:29:48 +0000 (0:00:00.395) 0:04:06.056 *** 2025-09-10 00:30:52.722969 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-09-10 00:30:52.722980 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-09-10 00:30:52.722991 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:30:52.723001 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-09-10 00:30:52.723012 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-09-10 00:30:52.723023 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-09-10 00:30:52.723033 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:30:52.723044 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-09-10 00:30:52.723055 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-09-10 00:30:52.723066 | orchestrator | 
skipping: [testbed-node-2] => (item=apt-daily)  2025-09-10 00:30:52.723076 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:30:52.723114 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:30:52.723127 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-09-10 00:30:52.723155 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-09-10 00:30:52.723168 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:30:52.723181 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-09-10 00:30:52.723193 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-09-10 00:30:52.723204 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:30:52.723217 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-09-10 00:30:52.723229 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-09-10 00:30:52.723242 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:30:52.723254 | orchestrator | 2025-09-10 00:30:52.723266 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-09-10 00:30:52.723279 | orchestrator | Wednesday 10 September 2025 00:29:49 +0000 (0:00:00.345) 0:04:06.402 *** 2025-09-10 00:30:52.723291 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-10 00:30:52.723304 | orchestrator | 2025-09-10 00:30:52.723317 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-09-10 00:30:52.723330 | orchestrator | Wednesday 10 September 2025 00:29:49 +0000 (0:00:00.412) 0:04:06.814 *** 2025-09-10 00:30:52.723342 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-09-10 00:30:52.723355 | orchestrator | skipping: 
[testbed-manager] 2025-09-10 00:30:52.723367 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-09-10 00:30:52.723380 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:30:52.723392 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-09-10 00:30:52.723404 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-09-10 00:30:52.723416 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:30:52.723428 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-09-10 00:30:52.723440 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:30:52.723452 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-09-10 00:30:52.723465 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:30:52.723478 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:30:52.723512 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-09-10 00:30:52.723523 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:30:52.723534 | orchestrator | 2025-09-10 00:30:52.723545 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-09-10 00:30:52.723556 | orchestrator | Wednesday 10 September 2025 00:29:49 +0000 (0:00:00.314) 0:04:07.128 *** 2025-09-10 00:30:52.723567 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-10 00:30:52.723579 | orchestrator | 2025-09-10 00:30:52.723590 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-09-10 00:30:52.723601 | orchestrator | Wednesday 10 September 2025 00:29:50 +0000 (0:00:00.416) 0:04:07.545 *** 2025-09-10 00:30:52.723612 | orchestrator | changed: [testbed-node-5] 2025-09-10 
00:30:52.723640 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:30:52.723651 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:30:52.723662 | orchestrator | changed: [testbed-manager] 2025-09-10 00:30:52.723673 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:30:52.723684 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:30:52.723695 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:30:52.723705 | orchestrator | 2025-09-10 00:30:52.723716 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-09-10 00:30:52.723736 | orchestrator | Wednesday 10 September 2025 00:30:25 +0000 (0:00:35.456) 0:04:43.001 *** 2025-09-10 00:30:52.723747 | orchestrator | changed: [testbed-manager] 2025-09-10 00:30:52.723758 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:30:52.723768 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:30:52.723779 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:30:52.723789 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:30:52.723800 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:30:52.723811 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:30:52.723821 | orchestrator | 2025-09-10 00:30:52.723832 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-09-10 00:30:52.723843 | orchestrator | Wednesday 10 September 2025 00:30:33 +0000 (0:00:08.097) 0:04:51.099 *** 2025-09-10 00:30:52.723854 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:30:52.723864 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:30:52.723875 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:30:52.723885 | orchestrator | changed: [testbed-manager] 2025-09-10 00:30:52.723896 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:30:52.723906 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:30:52.723917 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:30:52.723928 | 
orchestrator | 2025-09-10 00:30:52.723938 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-09-10 00:30:52.723949 | orchestrator | Wednesday 10 September 2025 00:30:41 +0000 (0:00:07.734) 0:04:58.834 *** 2025-09-10 00:30:52.723960 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:30:52.723971 | orchestrator | ok: [testbed-manager] 2025-09-10 00:30:52.723982 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:30:52.723992 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:30:52.724003 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:30:52.724014 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:30:52.724024 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:30:52.724035 | orchestrator | 2025-09-10 00:30:52.724046 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-09-10 00:30:52.724057 | orchestrator | Wednesday 10 September 2025 00:30:43 +0000 (0:00:01.562) 0:05:00.396 *** 2025-09-10 00:30:52.724068 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:30:52.724079 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:30:52.724094 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:30:52.724105 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:30:52.724116 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:30:52.724127 | orchestrator | changed: [testbed-manager] 2025-09-10 00:30:52.724137 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:30:52.724148 | orchestrator | 2025-09-10 00:30:52.724159 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-09-10 00:30:52.724169 | orchestrator | Wednesday 10 September 2025 00:30:48 +0000 (0:00:05.584) 0:05:05.981 *** 2025-09-10 00:30:52.724181 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2025-09-10 00:30:52.724193 | orchestrator | 2025-09-10 00:30:52.724204 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-09-10 00:30:52.724215 | orchestrator | Wednesday 10 September 2025 00:30:49 +0000 (0:00:00.591) 0:05:06.572 *** 2025-09-10 00:30:52.724226 | orchestrator | changed: [testbed-manager] 2025-09-10 00:30:52.724236 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:30:52.724247 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:30:52.724258 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:30:52.724268 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:30:52.724279 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:30:52.724289 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:30:52.724300 | orchestrator | 2025-09-10 00:30:52.724310 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-09-10 00:30:52.724328 | orchestrator | Wednesday 10 September 2025 00:30:50 +0000 (0:00:00.735) 0:05:07.308 *** 2025-09-10 00:30:52.724339 | orchestrator | ok: [testbed-manager] 2025-09-10 00:30:52.724350 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:30:52.724361 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:30:52.724372 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:30:52.724382 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:30:52.724393 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:30:52.724404 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:30:52.724414 | orchestrator | 2025-09-10 00:30:52.724425 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-09-10 00:30:52.724436 | orchestrator | Wednesday 10 September 2025 00:30:51 +0000 (0:00:01.581) 0:05:08.889 *** 2025-09-10 00:30:52.724446 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:30:52.724457 | orchestrator | changed: [testbed-node-2] 
2025-09-10 00:30:52.724468 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:30:52.724478 | orchestrator | changed: [testbed-manager] 2025-09-10 00:30:52.724519 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:30:52.724531 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:30:52.724541 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:30:52.724552 | orchestrator | 2025-09-10 00:30:52.724563 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-09-10 00:30:52.724574 | orchestrator | Wednesday 10 September 2025 00:30:52 +0000 (0:00:00.810) 0:05:09.700 *** 2025-09-10 00:30:52.724584 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:30:52.724595 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:30:52.724606 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:30:52.724616 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:30:52.724627 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:30:52.724637 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:30:52.724648 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:30:52.724659 | orchestrator | 2025-09-10 00:30:52.724669 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-09-10 00:30:52.724687 | orchestrator | Wednesday 10 September 2025 00:30:52 +0000 (0:00:00.277) 0:05:09.977 *** 2025-09-10 00:31:20.670153 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:31:20.670277 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:31:20.670293 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:31:20.670305 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:31:20.670317 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:31:20.670328 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:31:20.670339 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:31:20.670351 | orchestrator | 2025-09-10 00:31:20.670364 | 
orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-09-10 00:31:20.670376 | orchestrator | Wednesday 10 September 2025 00:30:53 +0000 (0:00:00.442) 0:05:10.419 *** 2025-09-10 00:31:20.670387 | orchestrator | ok: [testbed-manager] 2025-09-10 00:31:20.670399 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:31:20.670410 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:31:20.670421 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:31:20.670431 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:31:20.670442 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:31:20.670453 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:31:20.670463 | orchestrator | 2025-09-10 00:31:20.670475 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-09-10 00:31:20.670485 | orchestrator | Wednesday 10 September 2025 00:30:53 +0000 (0:00:00.324) 0:05:10.743 *** 2025-09-10 00:31:20.670540 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:31:20.670552 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:31:20.670563 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:31:20.670574 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:31:20.670585 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:31:20.670596 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:31:20.670607 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:31:20.670644 | orchestrator | 2025-09-10 00:31:20.670658 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-09-10 00:31:20.670672 | orchestrator | Wednesday 10 September 2025 00:30:53 +0000 (0:00:00.302) 0:05:11.046 *** 2025-09-10 00:31:20.670684 | orchestrator | ok: [testbed-manager] 2025-09-10 00:31:20.670696 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:31:20.670708 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:31:20.670720 | orchestrator | ok: 
[testbed-node-2]
2025-09-10 00:31:20.670733 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:31:20.670745 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:31:20.670758 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:31:20.670770 | orchestrator |
2025-09-10 00:31:20.670782 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2025-09-10 00:31:20.670794 | orchestrator | Wednesday 10 September 2025 00:30:54 +0000 (0:00:00.320) 0:05:11.366 ***
2025-09-10 00:31:20.670807 | orchestrator | ok: [testbed-manager] =>
2025-09-10 00:31:20.670819 | orchestrator |   docker_version: 5:27.5.1
2025-09-10 00:31:20.670832 | orchestrator | ok: [testbed-node-0] =>
2025-09-10 00:31:20.670844 | orchestrator |   docker_version: 5:27.5.1
2025-09-10 00:31:20.670856 | orchestrator | ok: [testbed-node-1] =>
2025-09-10 00:31:20.670868 | orchestrator |   docker_version: 5:27.5.1
2025-09-10 00:31:20.670881 | orchestrator | ok: [testbed-node-2] =>
2025-09-10 00:31:20.670893 | orchestrator |   docker_version: 5:27.5.1
2025-09-10 00:31:20.670906 | orchestrator | ok: [testbed-node-3] =>
2025-09-10 00:31:20.670919 | orchestrator |   docker_version: 5:27.5.1
2025-09-10 00:31:20.670931 | orchestrator | ok: [testbed-node-4] =>
2025-09-10 00:31:20.670943 | orchestrator |   docker_version: 5:27.5.1
2025-09-10 00:31:20.670957 | orchestrator | ok: [testbed-node-5] =>
2025-09-10 00:31:20.670969 | orchestrator |   docker_version: 5:27.5.1
2025-09-10 00:31:20.670981 | orchestrator |
2025-09-10 00:31:20.670992 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2025-09-10 00:31:20.671003 | orchestrator | Wednesday 10 September 2025 00:30:54 +0000 (0:00:00.297) 0:05:11.664 ***
2025-09-10 00:31:20.671014 | orchestrator | ok: [testbed-manager] =>
2025-09-10 00:31:20.671025 | orchestrator |   docker_cli_version: 5:27.5.1
2025-09-10 00:31:20.671035 | orchestrator | ok: [testbed-node-0] =>
2025-09-10 00:31:20.671046 | orchestrator |   docker_cli_version: 5:27.5.1
2025-09-10 00:31:20.671057 | orchestrator | ok: [testbed-node-1] =>
2025-09-10 00:31:20.671068 | orchestrator |   docker_cli_version: 5:27.5.1
2025-09-10 00:31:20.671078 | orchestrator | ok: [testbed-node-2] =>
2025-09-10 00:31:20.671089 | orchestrator |   docker_cli_version: 5:27.5.1
2025-09-10 00:31:20.671100 | orchestrator | ok: [testbed-node-3] =>
2025-09-10 00:31:20.671110 | orchestrator |   docker_cli_version: 5:27.5.1
2025-09-10 00:31:20.671121 | orchestrator | ok: [testbed-node-4] =>
2025-09-10 00:31:20.671132 | orchestrator |   docker_cli_version: 5:27.5.1
2025-09-10 00:31:20.671142 | orchestrator | ok: [testbed-node-5] =>
2025-09-10 00:31:20.671153 | orchestrator |   docker_cli_version: 5:27.5.1
2025-09-10 00:31:20.671163 | orchestrator |
2025-09-10 00:31:20.671174 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2025-09-10 00:31:20.671185 | orchestrator | Wednesday 10 September 2025 00:30:54 +0000 (0:00:00.309) 0:05:11.973 ***
2025-09-10 00:31:20.671195 | orchestrator | skipping: [testbed-manager]
2025-09-10 00:31:20.671206 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:31:20.671217 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:31:20.671227 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:31:20.671238 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:31:20.671249 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:31:20.671259 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:31:20.671270 | orchestrator |
2025-09-10 00:31:20.671281 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2025-09-10 00:31:20.671292 | orchestrator | Wednesday 10 September 2025 00:30:54 +0000 (0:00:00.276) 0:05:12.250 ***
2025-09-10 00:31:20.671302 | orchestrator | skipping: [testbed-manager]
2025-09-10 00:31:20.671323 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:31:20.671334 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:31:20.671344 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:31:20.671355 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:31:20.671366 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:31:20.671376 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:31:20.671387 | orchestrator |
2025-09-10 00:31:20.671398 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2025-09-10 00:31:20.671408 | orchestrator | Wednesday 10 September 2025 00:30:55 +0000 (0:00:00.302) 0:05:12.552 ***
2025-09-10 00:31:20.671439 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-10 00:31:20.671453 | orchestrator |
2025-09-10 00:31:20.671464 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2025-09-10 00:31:20.671475 | orchestrator | Wednesday 10 September 2025 00:30:55 +0000 (0:00:00.418) 0:05:12.971 ***
2025-09-10 00:31:20.671486 | orchestrator | ok: [testbed-manager]
2025-09-10 00:31:20.671522 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:31:20.671533 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:31:20.671544 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:31:20.671555 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:31:20.671566 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:31:20.671576 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:31:20.671587 | orchestrator |
2025-09-10 00:31:20.671598 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2025-09-10 00:31:20.671609 | orchestrator | Wednesday 10 September 2025 00:30:56 +0000 (0:00:00.891) 0:05:13.863 ***
2025-09-10 00:31:20.671620 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:31:20.671630 | orchestrator | ok: [testbed-manager]
2025-09-10 00:31:20.671661 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:31:20.671672 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:31:20.671683 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:31:20.671693 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:31:20.671704 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:31:20.671714 | orchestrator |
2025-09-10 00:31:20.671725 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2025-09-10 00:31:20.671737 | orchestrator | Wednesday 10 September 2025 00:30:59 +0000 (0:00:03.329) 0:05:17.192 ***
2025-09-10 00:31:20.671748 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2025-09-10 00:31:20.671759 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2025-09-10 00:31:20.671770 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2025-09-10 00:31:20.671781 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2025-09-10 00:31:20.671791 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2025-09-10 00:31:20.671802 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2025-09-10 00:31:20.671813 | orchestrator | skipping: [testbed-manager]
2025-09-10 00:31:20.671823 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2025-09-10 00:31:20.671834 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2025-09-10 00:31:20.671845 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2025-09-10 00:31:20.671855 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:31:20.671866 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2025-09-10 00:31:20.671881 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2025-09-10 00:31:20.671892 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2025-09-10 00:31:20.671903 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:31:20.671913 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2025-09-10 00:31:20.671924 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2025-09-10 00:31:20.671935 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2025-09-10 00:31:20.671953 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:31:20.671964 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2025-09-10 00:31:20.671974 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2025-09-10 00:31:20.671985 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:31:20.671995 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2025-09-10 00:31:20.672006 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:31:20.672017 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2025-09-10 00:31:20.672027 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2025-09-10 00:31:20.672038 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2025-09-10 00:31:20.672048 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:31:20.672059 | orchestrator |
2025-09-10 00:31:20.672070 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2025-09-10 00:31:20.672080 | orchestrator | Wednesday 10 September 2025 00:31:00 +0000 (0:00:00.636) 0:05:17.829 ***
2025-09-10 00:31:20.672091 | orchestrator | ok: [testbed-manager]
2025-09-10 00:31:20.672102 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:31:20.672112 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:31:20.672123 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:31:20.672134 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:31:20.672144 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:31:20.672155 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:31:20.672165 | orchestrator |
2025-09-10 00:31:20.672176 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2025-09-10 00:31:20.672187 | orchestrator | Wednesday 10 September 2025 00:31:07 +0000 (0:00:07.398) 0:05:25.228 ***
2025-09-10 00:31:20.672197 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:31:20.672208 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:31:20.672219 | orchestrator | ok: [testbed-manager]
2025-09-10 00:31:20.672229 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:31:20.672240 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:31:20.672250 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:31:20.672261 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:31:20.672271 | orchestrator |
2025-09-10 00:31:20.672282 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2025-09-10 00:31:20.672293 | orchestrator | Wednesday 10 September 2025 00:31:09 +0000 (0:00:01.256) 0:05:26.484 ***
2025-09-10 00:31:20.672304 | orchestrator | ok: [testbed-manager]
2025-09-10 00:31:20.672314 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:31:20.672325 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:31:20.672335 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:31:20.672346 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:31:20.672356 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:31:20.672367 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:31:20.672377 | orchestrator |
2025-09-10 00:31:20.672388 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2025-09-10 00:31:20.672398 | orchestrator | Wednesday 10 September 2025 00:31:17 +0000 (0:00:08.156) 0:05:34.640 ***
2025-09-10 00:31:20.672409 | orchestrator | changed: [testbed-manager]
2025-09-10 00:31:20.672420 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:31:20.672430 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:31:20.672449 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:32:04.809739 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:32:04.809873 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:32:04.809889 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:32:04.809901 | orchestrator |
2025-09-10 00:32:04.809914 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2025-09-10 00:32:04.809927 | orchestrator | Wednesday 10 September 2025 00:31:20 +0000 (0:00:03.280) 0:05:37.921 ***
2025-09-10 00:32:04.809939 | orchestrator | ok: [testbed-manager]
2025-09-10 00:32:04.809956 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:32:04.809977 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:32:04.810102 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:32:04.810127 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:32:04.810156 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:32:04.810175 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:32:04.810195 | orchestrator |
2025-09-10 00:32:04.810211 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2025-09-10 00:32:04.810222 | orchestrator | Wednesday 10 September 2025 00:31:21 +0000 (0:00:01.259) 0:05:39.180 ***
2025-09-10 00:32:04.810233 | orchestrator | ok: [testbed-manager]
2025-09-10 00:32:04.810244 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:32:04.810256 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:32:04.810269 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:32:04.810281 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:32:04.810293 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:32:04.810306 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:32:04.810318 | orchestrator |
2025-09-10 00:32:04.810331 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2025-09-10 00:32:04.810344 | orchestrator | Wednesday 10 September 2025 00:31:23 +0000 (0:00:01.281) 0:05:40.461 ***
2025-09-10 00:32:04.810356 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:32:04.810369 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:32:04.810381 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:32:04.810393 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:32:04.810405 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:32:04.810419 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:32:04.810431 | orchestrator | changed: [testbed-manager]
2025-09-10 00:32:04.810444 | orchestrator |
2025-09-10 00:32:04.810456 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2025-09-10 00:32:04.810469 | orchestrator | Wednesday 10 September 2025 00:31:23 +0000 (0:00:00.779) 0:05:41.241 ***
2025-09-10 00:32:04.810481 | orchestrator | ok: [testbed-manager]
2025-09-10 00:32:04.810495 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:32:04.810533 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:32:04.810553 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:32:04.810582 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:32:04.810595 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:32:04.810607 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:32:04.810618 | orchestrator |
2025-09-10 00:32:04.810629 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2025-09-10 00:32:04.810640 | orchestrator | Wednesday 10 September 2025 00:31:34 +0000 (0:00:10.150) 0:05:51.391 ***
2025-09-10 00:32:04.810650 | orchestrator | changed: [testbed-manager]
2025-09-10 00:32:04.810661 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:32:04.810671 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:32:04.810682 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:32:04.810693 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:32:04.810703 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:32:04.810714 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:32:04.810724 | orchestrator |
2025-09-10 00:32:04.810735 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2025-09-10 00:32:04.810746 | orchestrator | Wednesday 10 September 2025 00:31:35 +0000 (0:00:00.904) 0:05:52.296 ***
2025-09-10 00:32:04.810757 | orchestrator | ok: [testbed-manager]
2025-09-10 00:32:04.810767 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:32:04.810778 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:32:04.810788 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:32:04.810799 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:32:04.810810 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:32:04.810820 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:32:04.810831 | orchestrator |
2025-09-10 00:32:04.810841 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2025-09-10 00:32:04.810852 | orchestrator | Wednesday 10 September 2025 00:31:43 +0000 (0:00:08.686) 0:06:00.982 ***
2025-09-10 00:32:04.810874 | orchestrator | ok: [testbed-manager]
2025-09-10 00:32:04.810885 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:32:04.810896 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:32:04.810906 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:32:04.810917 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:32:04.810927 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:32:04.810938 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:32:04.810949 | orchestrator |
2025-09-10 00:32:04.810959 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2025-09-10 00:32:04.810970 | orchestrator | Wednesday 10 September 2025 00:31:54 +0000 (0:00:11.150) 0:06:12.133 ***
2025-09-10 00:32:04.810983 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2025-09-10 00:32:04.811003 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2025-09-10 00:32:04.811020 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2025-09-10 00:32:04.811038 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2025-09-10 00:32:04.811055 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2025-09-10 00:32:04.811073 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2025-09-10 00:32:04.811093 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2025-09-10 00:32:04.811111 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2025-09-10 00:32:04.811130 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2025-09-10 00:32:04.811142 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2025-09-10 00:32:04.811152 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2025-09-10 00:32:04.811163 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2025-09-10 00:32:04.811174 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2025-09-10 00:32:04.811185 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2025-09-10 00:32:04.811196 | orchestrator |
2025-09-10 00:32:04.811213 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2025-09-10 00:32:04.811256 | orchestrator | Wednesday 10 September 2025 00:31:56 +0000 (0:00:01.186) 0:06:13.320 ***
2025-09-10 00:32:04.811276 | orchestrator | skipping: [testbed-manager]
2025-09-10 00:32:04.811294 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:32:04.811310 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:32:04.811326 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:32:04.811343 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:32:04.811359 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:32:04.811375 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:32:04.811391 | orchestrator |
2025-09-10 00:32:04.811407 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2025-09-10 00:32:04.811423 | orchestrator | Wednesday 10 September 2025 00:31:56 +0000 (0:00:00.556) 0:06:13.877 ***
2025-09-10 00:32:04.811439 | orchestrator | ok: [testbed-manager]
2025-09-10 00:32:04.811454 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:32:04.811471 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:32:04.811488 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:32:04.811528 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:32:04.811545 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:32:04.811562 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:32:04.811580 | orchestrator |
2025-09-10 00:32:04.811597 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2025-09-10 00:32:04.811615 | orchestrator | Wednesday 10 September 2025 00:32:00 +0000 (0:00:03.805) 0:06:17.682 ***
2025-09-10 00:32:04.811633 | orchestrator | skipping: [testbed-manager]
2025-09-10 00:32:04.811651 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:32:04.811668 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:32:04.811684 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:32:04.811703 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:32:04.811723 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:32:04.811742 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:32:04.811776 | orchestrator |
2025-09-10 00:32:04.811795 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2025-09-10 00:32:04.811813 | orchestrator | Wednesday 10 September 2025 00:32:00 +0000 (0:00:00.492) 0:06:18.175 ***
2025-09-10 00:32:04.811831 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2025-09-10 00:32:04.811851 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2025-09-10 00:32:04.811870 | orchestrator | skipping: [testbed-manager]
2025-09-10 00:32:04.811892 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2025-09-10 00:32:04.811919 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2025-09-10 00:32:04.811930 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:32:04.811941 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2025-09-10 00:32:04.811952 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2025-09-10 00:32:04.811963 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:32:04.811973 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2025-09-10 00:32:04.811984 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2025-09-10 00:32:04.811995 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:32:04.812006 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2025-09-10 00:32:04.812016 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2025-09-10 00:32:04.812027 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:32:04.812037 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2025-09-10 00:32:04.812048 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2025-09-10 00:32:04.812059 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:32:04.812069 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2025-09-10 00:32:04.812080 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2025-09-10 00:32:04.812090 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:32:04.812101 | orchestrator |
2025-09-10 00:32:04.812112 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2025-09-10 00:32:04.812122 | orchestrator | Wednesday 10 September 2025 00:32:01 +0000 (0:00:00.724) 0:06:18.899 ***
2025-09-10 00:32:04.812133 | orchestrator | skipping: [testbed-manager]
2025-09-10 00:32:04.812144 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:32:04.812155 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:32:04.812165 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:32:04.812176 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:32:04.812187 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:32:04.812198 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:32:04.812208 | orchestrator |
2025-09-10 00:32:04.812219 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2025-09-10 00:32:04.812230 | orchestrator | Wednesday 10 September 2025 00:32:02 +0000 (0:00:00.498) 0:06:19.398 ***
2025-09-10 00:32:04.812241 | orchestrator | skipping: [testbed-manager]
2025-09-10 00:32:04.812251 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:32:04.812262 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:32:04.812272 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:32:04.812283 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:32:04.812294 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:32:04.812305 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:32:04.812315 | orchestrator |
2025-09-10 00:32:04.812326 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2025-09-10 00:32:04.812345 | orchestrator | Wednesday 10 September 2025 00:32:02 +0000 (0:00:00.490) 0:06:19.889 ***
2025-09-10 00:32:04.812363 | orchestrator | skipping: [testbed-manager]
2025-09-10 00:32:04.812381 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:32:04.812398 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:32:04.812414 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:32:04.812432 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:32:04.812461 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:32:04.812481 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:32:04.812500 | orchestrator |
2025-09-10 00:32:04.812555 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2025-09-10 00:32:04.812567 | orchestrator | Wednesday 10 September 2025 00:32:03 +0000 (0:00:00.505) 0:06:20.394 ***
2025-09-10 00:32:04.812578 | orchestrator | ok: [testbed-manager]
2025-09-10 00:32:04.812602 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:32:26.407094 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:32:26.407216 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:32:26.407232 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:32:26.407244 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:32:26.407255 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:32:26.407267 | orchestrator |
2025-09-10 00:32:26.407279 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2025-09-10 00:32:26.407292 | orchestrator | Wednesday 10 September 2025 00:32:04 +0000 (0:00:01.670) 0:06:22.064 ***
2025-09-10 00:32:26.407304 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-10 00:32:26.407317 | orchestrator |
2025-09-10 00:32:26.407328 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2025-09-10 00:32:26.407340 | orchestrator | Wednesday 10 September 2025 00:32:05 +0000 (0:00:01.060) 0:06:23.125 ***
2025-09-10 00:32:26.407350 | orchestrator | ok: [testbed-manager]
2025-09-10 00:32:26.407362 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:32:26.407374 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:32:26.407385 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:32:26.407396 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:32:26.407407 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:32:26.407418 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:32:26.407429 | orchestrator |
2025-09-10 00:32:26.407440 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2025-09-10 00:32:26.407451 | orchestrator | Wednesday 10 September 2025 00:32:06 +0000 (0:00:00.848) 0:06:23.973 ***
2025-09-10 00:32:26.407462 | orchestrator | ok: [testbed-manager]
2025-09-10 00:32:26.407473 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:32:26.407484 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:32:26.407495 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:32:26.407507 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:32:26.407561 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:32:26.407573 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:32:26.407584 | orchestrator |
2025-09-10 00:32:26.407595 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2025-09-10 00:32:26.407606 | orchestrator | Wednesday 10 September 2025 00:32:07 +0000 (0:00:00.822) 0:06:24.796 ***
2025-09-10 00:32:26.407618 | orchestrator | ok: [testbed-manager]
2025-09-10 00:32:26.407631 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:32:26.407643 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:32:26.407673 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:32:26.407686 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:32:26.407699 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:32:26.407712 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:32:26.407724 | orchestrator |
2025-09-10 00:32:26.407736 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2025-09-10 00:32:26.407749 | orchestrator | Wednesday 10 September 2025 00:32:08 +0000 (0:00:01.317) 0:06:26.113 ***
2025-09-10 00:32:26.407759 | orchestrator | skipping: [testbed-manager]
2025-09-10 00:32:26.407770 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:32:26.407781 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:32:26.407792 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:32:26.407803 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:32:26.407814 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:32:26.407852 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:32:26.407864 | orchestrator |
2025-09-10 00:32:26.407875 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2025-09-10 00:32:26.407886 | orchestrator | Wednesday 10 September 2025 00:32:10 +0000 (0:00:01.511) 0:06:27.624 ***
2025-09-10 00:32:26.407897 | orchestrator | ok: [testbed-manager]
2025-09-10 00:32:26.407907 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:32:26.407918 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:32:26.407929 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:32:26.407940 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:32:26.407951 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:32:26.407961 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:32:26.407972 | orchestrator |
2025-09-10 00:32:26.407982 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2025-09-10 00:32:26.407993 | orchestrator | Wednesday 10 September 2025 00:32:11 +0000 (0:00:01.339) 0:06:28.964 ***
2025-09-10 00:32:26.408004 | orchestrator | changed: [testbed-manager]
2025-09-10 00:32:26.408015 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:32:26.408040 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:32:26.408063 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:32:26.408074 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:32:26.408084 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:32:26.408095 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:32:26.408105 | orchestrator |
2025-09-10 00:32:26.408116 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2025-09-10 00:32:26.408127 | orchestrator | Wednesday 10 September 2025 00:32:13 +0000 (0:00:01.416) 0:06:30.381 ***
2025-09-10 00:32:26.408138 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-10 00:32:26.408149 | orchestrator |
2025-09-10 00:32:26.408160 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2025-09-10 00:32:26.408171 | orchestrator | Wednesday 10 September 2025 00:32:14 +0000 (0:00:01.023) 0:06:31.404 ***
2025-09-10 00:32:26.408182 | orchestrator | ok: [testbed-manager]
2025-09-10 00:32:26.408192 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:32:26.408203 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:32:26.408214 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:32:26.408225 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:32:26.408236 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:32:26.408246 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:32:26.408257 | orchestrator |
2025-09-10 00:32:26.408268 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2025-09-10 00:32:26.408279 | orchestrator | Wednesday 10 September 2025 00:32:15 +0000 (0:00:01.316) 0:06:32.721 ***
2025-09-10 00:32:26.408290 | orchestrator | ok: [testbed-manager]
2025-09-10 00:32:26.408300 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:32:26.408331 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:32:26.408342 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:32:26.408353 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:32:26.408364 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:32:26.408375 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:32:26.408386 | orchestrator |
2025-09-10 00:32:26.408397 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2025-09-10 00:32:26.408408 | orchestrator | Wednesday 10 September 2025 00:32:16 +0000 (0:00:01.110) 0:06:33.832 ***
2025-09-10 00:32:26.408419 | orchestrator | ok: [testbed-manager]
2025-09-10 00:32:26.408429 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:32:26.408440 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:32:26.408451 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:32:26.408462 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:32:26.408472 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:32:26.408483 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:32:26.408494 | orchestrator |
2025-09-10 00:32:26.408505 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2025-09-10 00:32:26.408544 | orchestrator | Wednesday 10 September 2025 00:32:17 +0000 (0:00:01.098) 0:06:34.930 ***
2025-09-10 00:32:26.408555 | orchestrator | ok: [testbed-manager]
2025-09-10 00:32:26.408566 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:32:26.408577 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:32:26.408587 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:32:26.408598 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:32:26.408609 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:32:26.408619 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:32:26.408630 | orchestrator |
2025-09-10 00:32:26.408641 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2025-09-10 00:32:26.408651 | orchestrator | Wednesday 10 September 2025 00:32:18 +0000 (0:00:01.059) 0:06:35.990 ***
2025-09-10 00:32:26.408663 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-10 00:32:26.408674 | orchestrator |
2025-09-10 00:32:26.408684 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-10 00:32:26.408695 | orchestrator | Wednesday 10 September 2025 00:32:19 +0000 (0:00:01.044) 0:06:37.034 ***
2025-09-10 00:32:26.408706 | orchestrator |
2025-09-10 00:32:26.408717 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-10 00:32:26.408728 | orchestrator | Wednesday 10 September 2025 00:32:19 +0000 (0:00:00.052) 0:06:37.087 ***
2025-09-10 00:32:26.408738 | orchestrator |
2025-09-10 00:32:26.408749 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-10 00:32:26.408760 | orchestrator | Wednesday 10 September 2025 00:32:19 +0000 (0:00:00.038) 0:06:37.126 ***
2025-09-10 00:32:26.408771 | orchestrator |
2025-09-10 00:32:26.408782 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-10 00:32:26.408792 | orchestrator | Wednesday 10 September 2025 00:32:19 +0000 (0:00:00.054) 0:06:37.180 ***
2025-09-10 00:32:26.408803 | orchestrator |
2025-09-10 00:32:26.408814 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-10 00:32:26.408825 | orchestrator | Wednesday 10 September 2025 00:32:19 +0000 (0:00:00.054) 0:06:37.234 ***
2025-09-10 00:32:26.408835 | orchestrator |
2025-09-10 00:32:26.408846 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-10 00:32:26.408857 | orchestrator | Wednesday 10 September 2025 00:32:20 +0000 (0:00:00.041) 0:06:37.276 ***
2025-09-10 00:32:26.408868 | orchestrator |
2025-09-10 00:32:26.408879 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-10 00:32:26.408890 | orchestrator | Wednesday 10 September 2025 00:32:20 +0000 (0:00:00.047) 0:06:37.323 ***
2025-09-10 00:32:26.408900 | orchestrator |
2025-09-10 00:32:26.408911 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-09-10 00:32:26.408922 | orchestrator | Wednesday 10 September 2025 00:32:20 +0000 (0:00:00.039) 0:06:37.363 ***
2025-09-10 00:32:26.408933 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:32:26.408943 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:32:26.408954 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:32:26.408965 | orchestrator |
2025-09-10 00:32:26.408976 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2025-09-10 00:32:26.408987 | orchestrator | Wednesday 10 September 2025 00:32:21 +0000 (0:00:01.172) 0:06:38.536 ***
2025-09-10 00:32:26.408998 | orchestrator | changed: [testbed-manager]
2025-09-10 00:32:26.409008 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:32:26.409019 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:32:26.409030 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:32:26.409040 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:32:26.409051 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:32:26.409070 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:32:26.409082 | orchestrator |
2025-09-10 00:32:26.409093 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2025-09-10 00:32:26.409110 | orchestrator | Wednesday 10 September 2025 00:32:22 +0000 (0:00:01.283) 0:06:39.819 ***
2025-09-10 00:32:26.409122 | orchestrator | skipping: [testbed-manager]
2025-09-10 00:32:26.409133 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:32:26.409144 | orchestrator | changed: [testbed-node-1]
2025-09-10
00:32:26.409154 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:32:26.409165 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:32:26.409176 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:32:26.409187 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:32:26.409198 | orchestrator | 2025-09-10 00:32:26.409209 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-09-10 00:32:26.409220 | orchestrator | Wednesday 10 September 2025 00:32:25 +0000 (0:00:02.723) 0:06:42.543 *** 2025-09-10 00:32:26.409231 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:32:26.409242 | orchestrator | 2025-09-10 00:32:26.409253 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-09-10 00:32:26.409264 | orchestrator | Wednesday 10 September 2025 00:32:25 +0000 (0:00:00.102) 0:06:42.645 *** 2025-09-10 00:32:26.409274 | orchestrator | ok: [testbed-manager] 2025-09-10 00:32:26.409285 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:32:26.409296 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:32:26.409307 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:32:26.409325 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:32:52.137199 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:32:52.137339 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:32:52.137395 | orchestrator | 2025-09-10 00:32:52.137411 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-09-10 00:32:52.137424 | orchestrator | Wednesday 10 September 2025 00:32:26 +0000 (0:00:01.012) 0:06:43.658 *** 2025-09-10 00:32:52.137437 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:32:52.137448 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:32:52.137459 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:32:52.137470 | orchestrator | skipping: [testbed-node-2] 2025-09-10 
00:32:52.137481 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:32:52.137492 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:32:52.137539 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:32:52.137552 | orchestrator | 2025-09-10 00:32:52.137563 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-09-10 00:32:52.137574 | orchestrator | Wednesday 10 September 2025 00:32:26 +0000 (0:00:00.525) 0:06:44.183 *** 2025-09-10 00:32:52.137587 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-10 00:32:52.137601 | orchestrator | 2025-09-10 00:32:52.137613 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-09-10 00:32:52.137625 | orchestrator | Wednesday 10 September 2025 00:32:27 +0000 (0:00:01.063) 0:06:45.247 *** 2025-09-10 00:32:52.137637 | orchestrator | ok: [testbed-manager] 2025-09-10 00:32:52.137648 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:32:52.137659 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:32:52.137670 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:32:52.137681 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:32:52.137692 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:32:52.137703 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:32:52.137714 | orchestrator | 2025-09-10 00:32:52.137725 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-09-10 00:32:52.137736 | orchestrator | Wednesday 10 September 2025 00:32:28 +0000 (0:00:00.846) 0:06:46.093 *** 2025-09-10 00:32:52.137747 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-09-10 00:32:52.137758 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-09-10 00:32:52.137769 
| orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-09-10 00:32:52.137823 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-09-10 00:32:52.137834 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-09-10 00:32:52.137845 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-09-10 00:32:52.137856 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-09-10 00:32:52.137867 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-09-10 00:32:52.137878 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-09-10 00:32:52.137889 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-09-10 00:32:52.137899 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-09-10 00:32:52.137910 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-09-10 00:32:52.137921 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-09-10 00:32:52.137932 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-09-10 00:32:52.137942 | orchestrator | 2025-09-10 00:32:52.137953 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-09-10 00:32:52.137964 | orchestrator | Wednesday 10 September 2025 00:32:31 +0000 (0:00:02.390) 0:06:48.484 *** 2025-09-10 00:32:52.137975 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:32:52.137986 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:32:52.137997 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:32:52.138081 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:32:52.138091 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:32:52.138102 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:32:52.138113 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:32:52.138123 | orchestrator | 2025-09-10 00:32:52.138134 | orchestrator | TASK 
[osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-09-10 00:32:52.138145 | orchestrator | Wednesday 10 September 2025 00:32:31 +0000 (0:00:00.487) 0:06:48.971 *** 2025-09-10 00:32:52.138158 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-10 00:32:52.138171 | orchestrator | 2025-09-10 00:32:52.138182 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-09-10 00:32:52.138193 | orchestrator | Wednesday 10 September 2025 00:32:32 +0000 (0:00:00.973) 0:06:49.945 *** 2025-09-10 00:32:52.138204 | orchestrator | ok: [testbed-manager] 2025-09-10 00:32:52.138216 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:32:52.138226 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:32:52.138237 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:32:52.138248 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:32:52.138258 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:32:52.138269 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:32:52.138280 | orchestrator | 2025-09-10 00:32:52.138291 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-09-10 00:32:52.138301 | orchestrator | Wednesday 10 September 2025 00:32:33 +0000 (0:00:00.805) 0:06:50.750 *** 2025-09-10 00:32:52.138313 | orchestrator | ok: [testbed-manager] 2025-09-10 00:32:52.138323 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:32:52.138334 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:32:52.138345 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:32:52.138355 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:32:52.138366 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:32:52.138377 | orchestrator | ok: [testbed-node-5] 2025-09-10 
00:32:52.138388 | orchestrator | 2025-09-10 00:32:52.138399 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-09-10 00:32:52.138429 | orchestrator | Wednesday 10 September 2025 00:32:34 +0000 (0:00:00.783) 0:06:51.533 *** 2025-09-10 00:32:52.138441 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:32:52.138452 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:32:52.138463 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:32:52.138474 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:32:52.138499 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:32:52.138510 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:32:52.138540 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:32:52.138552 | orchestrator | 2025-09-10 00:32:52.138563 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-09-10 00:32:52.138574 | orchestrator | Wednesday 10 September 2025 00:32:34 +0000 (0:00:00.501) 0:06:52.035 *** 2025-09-10 00:32:52.138585 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:32:52.138596 | orchestrator | ok: [testbed-manager] 2025-09-10 00:32:52.138607 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:32:52.138617 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:32:52.138628 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:32:52.138639 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:32:52.138649 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:32:52.138660 | orchestrator | 2025-09-10 00:32:52.138671 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-09-10 00:32:52.138682 | orchestrator | Wednesday 10 September 2025 00:32:36 +0000 (0:00:01.660) 0:06:53.695 *** 2025-09-10 00:32:52.138693 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:32:52.138703 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:32:52.138714 | orchestrator | skipping: 
[testbed-node-1] 2025-09-10 00:32:52.138725 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:32:52.138735 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:32:52.138746 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:32:52.138757 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:32:52.138767 | orchestrator | 2025-09-10 00:32:52.138778 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-09-10 00:32:52.138789 | orchestrator | Wednesday 10 September 2025 00:32:36 +0000 (0:00:00.493) 0:06:54.189 *** 2025-09-10 00:32:52.138800 | orchestrator | ok: [testbed-manager] 2025-09-10 00:32:52.138811 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:32:52.138821 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:32:52.138832 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:32:52.138859 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:32:52.138870 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:32:52.138881 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:32:52.138891 | orchestrator | 2025-09-10 00:32:52.138902 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-09-10 00:32:52.138919 | orchestrator | Wednesday 10 September 2025 00:32:44 +0000 (0:00:08.022) 0:07:02.212 *** 2025-09-10 00:32:52.138930 | orchestrator | ok: [testbed-manager] 2025-09-10 00:32:52.138941 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:32:52.138952 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:32:52.138963 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:32:52.138973 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:32:52.138984 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:32:52.138995 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:32:52.139005 | orchestrator | 2025-09-10 00:32:52.139017 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] 
********************** 2025-09-10 00:32:52.139028 | orchestrator | Wednesday 10 September 2025 00:32:46 +0000 (0:00:01.306) 0:07:03.518 *** 2025-09-10 00:32:52.139038 | orchestrator | ok: [testbed-manager] 2025-09-10 00:32:52.139049 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:32:52.139060 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:32:52.139070 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:32:52.139081 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:32:52.139091 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:32:52.139102 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:32:52.139112 | orchestrator | 2025-09-10 00:32:52.139123 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-09-10 00:32:52.139134 | orchestrator | Wednesday 10 September 2025 00:32:47 +0000 (0:00:01.734) 0:07:05.252 *** 2025-09-10 00:32:52.139145 | orchestrator | ok: [testbed-manager] 2025-09-10 00:32:52.139163 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:32:52.139173 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:32:52.139184 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:32:52.139194 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:32:52.139205 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:32:52.139216 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:32:52.139226 | orchestrator | 2025-09-10 00:32:52.139237 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-10 00:32:52.139248 | orchestrator | Wednesday 10 September 2025 00:32:49 +0000 (0:00:01.866) 0:07:07.119 *** 2025-09-10 00:32:52.139259 | orchestrator | ok: [testbed-manager] 2025-09-10 00:32:52.139270 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:32:52.139280 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:32:52.139291 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:32:52.139302 | orchestrator | ok: 
[testbed-node-3] 2025-09-10 00:32:52.139312 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:32:52.139323 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:32:52.139334 | orchestrator | 2025-09-10 00:32:52.139345 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-10 00:32:52.139356 | orchestrator | Wednesday 10 September 2025 00:32:50 +0000 (0:00:00.800) 0:07:07.920 *** 2025-09-10 00:32:52.139367 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:32:52.139377 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:32:52.139388 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:32:52.139399 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:32:52.139409 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:32:52.139420 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:32:52.139431 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:32:52.139442 | orchestrator | 2025-09-10 00:32:52.139452 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-09-10 00:32:52.139463 | orchestrator | Wednesday 10 September 2025 00:32:51 +0000 (0:00:00.971) 0:07:08.892 *** 2025-09-10 00:32:52.139474 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:32:52.139484 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:32:52.139495 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:32:52.139506 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:32:52.139516 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:32:52.139555 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:32:52.139566 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:32:52.139576 | orchestrator | 2025-09-10 00:32:52.139594 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-09-10 00:33:24.391072 | orchestrator | Wednesday 10 September 2025 00:32:52 +0000 (0:00:00.501) 0:07:09.393 
*** 2025-09-10 00:33:24.391222 | orchestrator | ok: [testbed-manager] 2025-09-10 00:33:24.391250 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:33:24.391271 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:33:24.391291 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:33:24.391310 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:33:24.391329 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:33:24.391350 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:33:24.391371 | orchestrator | 2025-09-10 00:33:24.391393 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-09-10 00:33:24.391414 | orchestrator | Wednesday 10 September 2025 00:32:52 +0000 (0:00:00.487) 0:07:09.880 *** 2025-09-10 00:33:24.391434 | orchestrator | ok: [testbed-manager] 2025-09-10 00:33:24.391455 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:33:24.391475 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:33:24.391495 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:33:24.391515 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:33:24.391581 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:33:24.391606 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:33:24.391628 | orchestrator | 2025-09-10 00:33:24.391651 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-09-10 00:33:24.391673 | orchestrator | Wednesday 10 September 2025 00:32:53 +0000 (0:00:00.511) 0:07:10.391 *** 2025-09-10 00:33:24.391734 | orchestrator | ok: [testbed-manager] 2025-09-10 00:33:24.391758 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:33:24.391778 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:33:24.391799 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:33:24.391818 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:33:24.391837 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:33:24.391855 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:33:24.391876 | orchestrator | 2025-09-10 
00:33:24.391897 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-09-10 00:33:24.391915 | orchestrator | Wednesday 10 September 2025 00:32:53 +0000 (0:00:00.478) 0:07:10.870 *** 2025-09-10 00:33:24.391932 | orchestrator | ok: [testbed-manager] 2025-09-10 00:33:24.391949 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:33:24.391965 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:33:24.391982 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:33:24.391999 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:33:24.392015 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:33:24.392031 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:33:24.392047 | orchestrator | 2025-09-10 00:33:24.392065 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-09-10 00:33:24.392082 | orchestrator | Wednesday 10 September 2025 00:32:59 +0000 (0:00:05.673) 0:07:16.544 *** 2025-09-10 00:33:24.392119 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:33:24.392140 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:33:24.392158 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:33:24.392176 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:33:24.392195 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:33:24.392215 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:33:24.392234 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:33:24.392253 | orchestrator | 2025-09-10 00:33:24.392272 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-09-10 00:33:24.392290 | orchestrator | Wednesday 10 September 2025 00:32:59 +0000 (0:00:00.530) 0:07:17.075 *** 2025-09-10 00:33:24.392311 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2025-09-10 00:33:24.392332 | orchestrator | 2025-09-10 00:33:24.392352 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-09-10 00:33:24.392369 | orchestrator | Wednesday 10 September 2025 00:33:00 +0000 (0:00:00.802) 0:07:17.877 *** 2025-09-10 00:33:24.392387 | orchestrator | ok: [testbed-manager] 2025-09-10 00:33:24.392402 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:33:24.392413 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:33:24.392423 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:33:24.392434 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:33:24.392445 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:33:24.392455 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:33:24.392466 | orchestrator | 2025-09-10 00:33:24.392477 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-09-10 00:33:24.392487 | orchestrator | Wednesday 10 September 2025 00:33:02 +0000 (0:00:02.028) 0:07:19.905 *** 2025-09-10 00:33:24.392498 | orchestrator | ok: [testbed-manager] 2025-09-10 00:33:24.392509 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:33:24.392519 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:33:24.392568 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:33:24.392580 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:33:24.392591 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:33:24.392601 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:33:24.392612 | orchestrator | 2025-09-10 00:33:24.392623 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-09-10 00:33:24.392634 | orchestrator | Wednesday 10 September 2025 00:33:03 +0000 (0:00:01.132) 0:07:21.038 *** 2025-09-10 00:33:24.392645 | orchestrator | ok: [testbed-manager] 2025-09-10 00:33:24.392655 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:33:24.392680 | orchestrator | ok: 
[testbed-node-1] 2025-09-10 00:33:24.392691 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:33:24.392701 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:33:24.392712 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:33:24.392722 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:33:24.392733 | orchestrator | 2025-09-10 00:33:24.392744 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-09-10 00:33:24.392755 | orchestrator | Wednesday 10 September 2025 00:33:04 +0000 (0:00:00.801) 0:07:21.840 *** 2025-09-10 00:33:24.392766 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-10 00:33:24.392778 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-10 00:33:24.392789 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-10 00:33:24.392823 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-10 00:33:24.392835 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-10 00:33:24.392846 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-10 00:33:24.392857 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-10 00:33:24.392868 | orchestrator | 2025-09-10 00:33:24.392879 | orchestrator | TASK [osism.services.lldpd : Include distribution specific 
install tasks] ****** 2025-09-10 00:33:24.392890 | orchestrator | Wednesday 10 September 2025 00:33:06 +0000 (0:00:01.698) 0:07:23.538 *** 2025-09-10 00:33:24.392902 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-10 00:33:24.392913 | orchestrator | 2025-09-10 00:33:24.392924 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-09-10 00:33:24.392934 | orchestrator | Wednesday 10 September 2025 00:33:07 +0000 (0:00:01.021) 0:07:24.559 *** 2025-09-10 00:33:24.392945 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:33:24.392956 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:33:24.392966 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:33:24.392977 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:33:24.392988 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:33:24.392998 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:33:24.393009 | orchestrator | changed: [testbed-manager] 2025-09-10 00:33:24.393019 | orchestrator | 2025-09-10 00:33:24.393030 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-09-10 00:33:24.393041 | orchestrator | Wednesday 10 September 2025 00:33:16 +0000 (0:00:09.206) 0:07:33.766 *** 2025-09-10 00:33:24.393051 | orchestrator | ok: [testbed-manager] 2025-09-10 00:33:24.393070 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:33:24.393081 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:33:24.393092 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:33:24.393102 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:33:24.393113 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:33:24.393124 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:33:24.393134 | orchestrator | 2025-09-10 
00:33:24.393145 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-09-10 00:33:24.393156 | orchestrator | Wednesday 10 September 2025 00:33:18 +0000 (0:00:01.877) 0:07:35.644 *** 2025-09-10 00:33:24.393167 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:33:24.393178 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:33:24.393195 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:33:24.393206 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:33:24.393216 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:33:24.393227 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:33:24.393237 | orchestrator | 2025-09-10 00:33:24.393248 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-09-10 00:33:24.393259 | orchestrator | Wednesday 10 September 2025 00:33:19 +0000 (0:00:01.307) 0:07:36.951 *** 2025-09-10 00:33:24.393270 | orchestrator | changed: [testbed-manager] 2025-09-10 00:33:24.393281 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:33:24.393291 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:33:24.393302 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:33:24.393313 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:33:24.393323 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:33:24.393334 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:33:24.393344 | orchestrator | 2025-09-10 00:33:24.393355 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-09-10 00:33:24.393366 | orchestrator | 2025-09-10 00:33:24.393376 | orchestrator | TASK [Include hardening role] ************************************************** 2025-09-10 00:33:24.393387 | orchestrator | Wednesday 10 September 2025 00:33:20 +0000 (0:00:01.265) 0:07:38.216 *** 2025-09-10 00:33:24.393398 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:33:24.393408 | orchestrator | skipping: [testbed-node-0] 
2025-09-10 00:33:24.393419 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:33:24.393430 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:33:24.393440 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:33:24.393451 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:33:24.393461 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:33:24.393472 | orchestrator |
2025-09-10 00:33:24.393483 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2025-09-10 00:33:24.393493 | orchestrator |
2025-09-10 00:33:24.393504 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2025-09-10 00:33:24.393515 | orchestrator | Wednesday 10 September 2025 00:33:21 +0000 (0:00:00.532) 0:07:38.749 ***
2025-09-10 00:33:24.393550 | orchestrator | changed: [testbed-manager]
2025-09-10 00:33:24.393563 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:33:24.393573 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:33:24.393584 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:33:24.393594 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:33:24.393605 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:33:24.393615 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:33:24.393626 | orchestrator |
2025-09-10 00:33:24.393636 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2025-09-10 00:33:24.393647 | orchestrator | Wednesday 10 September 2025 00:33:22 +0000 (0:00:01.312) 0:07:40.061 ***
2025-09-10 00:33:24.393658 | orchestrator | ok: [testbed-manager]
2025-09-10 00:33:24.393668 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:33:24.393679 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:33:24.393689 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:33:24.393700 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:33:24.393710 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:33:24.393721 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:33:24.393731 | orchestrator |
2025-09-10 00:33:24.393742 | orchestrator | TASK [Include auditd role] *****************************************************
2025-09-10 00:33:24.393760 | orchestrator | Wednesday 10 September 2025 00:33:24 +0000 (0:00:01.579) 0:07:41.640 ***
2025-09-10 00:33:48.380051 | orchestrator | skipping: [testbed-manager]
2025-09-10 00:33:48.380178 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:33:48.380196 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:33:48.380208 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:33:48.380220 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:33:48.380231 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:33:48.380242 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:33:48.380254 | orchestrator |
2025-09-10 00:33:48.380296 | orchestrator | TASK [Include smartd role] *****************************************************
2025-09-10 00:33:48.380310 | orchestrator | Wednesday 10 September 2025 00:33:24 +0000 (0:00:00.495) 0:07:42.136 ***
2025-09-10 00:33:48.380321 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-10 00:33:48.380333 | orchestrator |
2025-09-10 00:33:48.380344 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-09-10 00:33:48.380355 | orchestrator | Wednesday 10 September 2025 00:33:25 +0000 (0:00:01.030) 0:07:43.167 ***
2025-09-10 00:33:48.380367 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-10 00:33:48.380380 | orchestrator |
2025-09-10 00:33:48.380391 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2025-09-10 00:33:48.380401 | orchestrator | Wednesday 10 September 2025 00:33:26 +0000 (0:00:00.828) 0:07:43.995 ***
2025-09-10 00:33:48.380412 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:33:48.380423 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:33:48.380433 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:33:48.380444 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:33:48.380454 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:33:48.380465 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:33:48.380475 | orchestrator | changed: [testbed-manager]
2025-09-10 00:33:48.380486 | orchestrator |
2025-09-10 00:33:48.380496 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2025-09-10 00:33:48.380507 | orchestrator | Wednesday 10 September 2025 00:33:35 +0000 (0:00:08.447) 0:07:52.442 ***
2025-09-10 00:33:48.380518 | orchestrator | changed: [testbed-manager]
2025-09-10 00:33:48.380557 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:33:48.380569 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:33:48.380582 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:33:48.380594 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:33:48.380606 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:33:48.380618 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:33:48.380630 | orchestrator |
2025-09-10 00:33:48.380642 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2025-09-10 00:33:48.380654 | orchestrator | Wednesday 10 September 2025 00:33:36 +0000 (0:00:00.828) 0:07:53.271 ***
2025-09-10 00:33:48.380667 | orchestrator | changed: [testbed-manager]
2025-09-10 00:33:48.380679 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:33:48.380692 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:33:48.380704 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:33:48.380716 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:33:48.380728 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:33:48.380740 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:33:48.380752 | orchestrator |
2025-09-10 00:33:48.380765 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2025-09-10 00:33:48.380778 | orchestrator | Wednesday 10 September 2025 00:33:37 +0000 (0:00:01.559) 0:07:54.831 ***
2025-09-10 00:33:48.380789 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:33:48.380801 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:33:48.380813 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:33:48.380825 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:33:48.380837 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:33:48.380849 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:33:48.380862 | orchestrator | changed: [testbed-manager]
2025-09-10 00:33:48.380874 | orchestrator |
2025-09-10 00:33:48.380887 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2025-09-10 00:33:48.380899 | orchestrator | Wednesday 10 September 2025 00:33:39 +0000 (0:00:02.313) 0:07:57.145 ***
2025-09-10 00:33:48.380911 | orchestrator | changed: [testbed-manager]
2025-09-10 00:33:48.380933 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:33:48.380944 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:33:48.380954 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:33:48.380965 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:33:48.380975 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:33:48.380986 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:33:48.380996 | orchestrator |
2025-09-10 00:33:48.381007 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2025-09-10 00:33:48.381017 | orchestrator | Wednesday 10 September 2025 00:33:41 +0000 (0:00:01.151) 0:07:58.296 ***
2025-09-10 00:33:48.381028 | orchestrator | changed: [testbed-manager]
2025-09-10 00:33:48.381038 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:33:48.381049 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:33:48.381059 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:33:48.381070 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:33:48.381080 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:33:48.381090 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:33:48.381101 | orchestrator |
2025-09-10 00:33:48.381112 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2025-09-10 00:33:48.381122 | orchestrator |
2025-09-10 00:33:48.381133 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2025-09-10 00:33:48.381191 | orchestrator | Wednesday 10 September 2025 00:33:42 +0000 (0:00:01.393) 0:07:59.690 ***
2025-09-10 00:33:48.381204 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-10 00:33:48.381216 | orchestrator |
2025-09-10 00:33:48.381226 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-09-10 00:33:48.381255 | orchestrator | Wednesday 10 September 2025 00:33:43 +0000 (0:00:00.821) 0:08:00.512 ***
2025-09-10 00:33:48.381267 | orchestrator | ok: [testbed-manager]
2025-09-10 00:33:48.381278 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:33:48.381288 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:33:48.381299 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:33:48.381310 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:33:48.381320 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:33:48.381330 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:33:48.381341 | orchestrator |
2025-09-10 00:33:48.381351 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-09-10 00:33:48.381362 | orchestrator | Wednesday 10 September 2025 00:33:44 +0000 (0:00:01.287) 0:08:01.315 ***
2025-09-10 00:33:48.381373 | orchestrator | changed: [testbed-manager]
2025-09-10 00:33:48.381383 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:33:48.381394 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:33:48.381405 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:33:48.381415 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:33:48.381425 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:33:48.381436 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:33:48.381446 | orchestrator |
2025-09-10 00:33:48.381457 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2025-09-10 00:33:48.381467 | orchestrator | Wednesday 10 September 2025 00:33:45 +0000 (0:00:01.287) 0:08:02.602 ***
2025-09-10 00:33:48.381478 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-10 00:33:48.381489 | orchestrator |
2025-09-10 00:33:48.381500 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-09-10 00:33:48.381510 | orchestrator | Wednesday 10 September 2025 00:33:46 +0000 (0:00:00.846) 0:08:03.448 ***
2025-09-10 00:33:48.381521 | orchestrator | ok: [testbed-manager]
2025-09-10 00:33:48.381564 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:33:48.381576 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:33:48.381586 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:33:48.381597 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:33:48.381616 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:33:48.381626 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:33:48.381637 | orchestrator |
2025-09-10 00:33:48.381648 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-09-10 00:33:48.381659 | orchestrator | Wednesday 10 September 2025 00:33:46 +0000 (0:00:00.807) 0:08:04.256 ***
2025-09-10 00:33:48.381669 | orchestrator | changed: [testbed-manager]
2025-09-10 00:33:48.381686 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:33:48.381697 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:33:48.381708 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:33:48.381718 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:33:48.381729 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:33:48.381740 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:33:48.381750 | orchestrator |
2025-09-10 00:33:48.381761 | orchestrator | PLAY RECAP *********************************************************************
2025-09-10 00:33:48.381773 | orchestrator | testbed-manager : ok=163  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0
2025-09-10 00:33:48.381785 | orchestrator | testbed-node-0 : ok=171  changed=66  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-09-10 00:33:48.381796 | orchestrator | testbed-node-1 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-09-10 00:33:48.381807 | orchestrator | testbed-node-2 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-09-10 00:33:48.381818 | orchestrator | testbed-node-3 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-09-10 00:33:48.381828 | orchestrator | testbed-node-4 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-09-10 00:33:48.381839 | orchestrator | testbed-node-5 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-09-10 00:33:48.381850 | orchestrator |
2025-09-10 00:33:48.381861 | orchestrator |
2025-09-10 00:33:48.381872 | orchestrator | TASKS RECAP ********************************************************************
2025-09-10 00:33:48.381883 | orchestrator | Wednesday 10 September 2025 00:33:48 +0000 (0:00:01.360) 0:08:05.617 ***
2025-09-10 00:33:48.381894 | orchestrator | ===============================================================================
2025-09-10 00:33:48.381905 | orchestrator | osism.commons.packages : Install required packages --------------------- 78.66s
2025-09-10 00:33:48.381915 | orchestrator | osism.commons.packages : Download required packages -------------------- 37.38s
2025-09-10 00:33:48.381926 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 35.46s
2025-09-10 00:33:48.381937 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.20s
2025-09-10 00:33:48.381947 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 13.35s
2025-09-10 00:33:48.381958 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 12.42s
2025-09-10 00:33:48.381970 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.15s
2025-09-10 00:33:48.381980 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.15s
2025-09-10 00:33:48.381991 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.21s
2025-09-10 00:33:48.382002 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.69s
2025-09-10 00:33:48.382068 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.45s
2025-09-10 00:33:48.839895 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.16s
2025-09-10 00:33:48.840006 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.10s
2025-09-10 00:33:48.840048 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.09s
2025-09-10 00:33:48.840061 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.02s
2025-09-10 00:33:48.840072 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.73s
2025-09-10 00:33:48.840083 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.40s
2025-09-10 00:33:48.840094 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.94s
2025-09-10 00:33:48.840104 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.67s
2025-09-10 00:33:48.840115 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.58s
2025-09-10 00:33:49.129768 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-09-10 00:33:49.129861 | orchestrator | + osism apply network
2025-09-10 00:34:01.680869 | orchestrator | 2025-09-10 00:34:01 | INFO  | Task 4807acc9-9699-466d-bc5d-59e6a8dac7f3 (network) was prepared for execution.
2025-09-10 00:34:01.680967 | orchestrator | 2025-09-10 00:34:01 | INFO  | It takes a moment until task 4807acc9-9699-466d-bc5d-59e6a8dac7f3 (network) has been started and output is visible here.
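The `osism apply network` run that follows renders a netplan file per host; later in the log the role manages `/etc/netplan/01-osism.yaml` and removes the stock `/etc/netplan/50-cloud-init.yaml`. As a rough orientation only, a generated file of this kind could look like the sketch below. The interface name and all addresses here are illustrative assumptions; the log shows only the file paths, not the file contents.

```yaml
# Hypothetical sketch of a generated /etc/netplan/01-osism.yaml.
# Interface name (ens3) and addresses are assumptions, not taken from the log.
network:
  version: 2
  ethernets:
    ens3:
      dhcp4: false
      addresses:
        - 192.168.16.10/20    # per-node management address (assumed)
      routes:
        - to: default
          via: 192.168.16.1   # assumed gateway
```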
2025-09-10 00:34:30.485186 | orchestrator |
2025-09-10 00:34:30.485344 | orchestrator | PLAY [Apply role network] ******************************************************
2025-09-10 00:34:30.485366 | orchestrator |
2025-09-10 00:34:30.485386 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2025-09-10 00:34:30.485405 | orchestrator | Wednesday 10 September 2025 00:34:05 +0000 (0:00:00.295) 0:00:00.295 ***
2025-09-10 00:34:30.485423 | orchestrator | ok: [testbed-manager]
2025-09-10 00:34:30.485444 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:34:30.485463 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:34:30.485480 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:34:30.485502 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:34:30.485527 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:34:30.485584 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:34:30.485602 | orchestrator |
2025-09-10 00:34:30.485620 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2025-09-10 00:34:30.485638 | orchestrator | Wednesday 10 September 2025 00:34:06 +0000 (0:00:00.739) 0:00:01.035 ***
2025-09-10 00:34:30.485659 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-10 00:34:30.485680 | orchestrator |
2025-09-10 00:34:30.485694 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2025-09-10 00:34:30.485707 | orchestrator | Wednesday 10 September 2025 00:34:07 +0000 (0:00:01.259) 0:00:02.294 ***
2025-09-10 00:34:30.485719 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:34:30.485732 | orchestrator | ok: [testbed-manager]
2025-09-10 00:34:30.485744 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:34:30.485756 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:34:30.485768 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:34:30.485780 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:34:30.485792 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:34:30.485805 | orchestrator |
2025-09-10 00:34:30.485818 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2025-09-10 00:34:30.485830 | orchestrator | Wednesday 10 September 2025 00:34:09 +0000 (0:00:02.057) 0:00:04.352 ***
2025-09-10 00:34:30.485843 | orchestrator | ok: [testbed-manager]
2025-09-10 00:34:30.485855 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:34:30.485867 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:34:30.485879 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:34:30.485891 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:34:30.485903 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:34:30.485915 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:34:30.485927 | orchestrator |
2025-09-10 00:34:30.485939 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2025-09-10 00:34:30.485975 | orchestrator | Wednesday 10 September 2025 00:34:11 +0000 (0:00:01.818) 0:00:06.171 ***
2025-09-10 00:34:30.485988 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2025-09-10 00:34:30.486001 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2025-09-10 00:34:30.486013 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2025-09-10 00:34:30.486087 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2025-09-10 00:34:30.486097 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2025-09-10 00:34:30.486144 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2025-09-10 00:34:30.486157 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2025-09-10 00:34:30.486168 | orchestrator |
2025-09-10 00:34:30.486179 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2025-09-10 00:34:30.486190 | orchestrator | Wednesday 10 September 2025 00:34:12 +0000 (0:00:00.999) 0:00:07.170 ***
2025-09-10 00:34:30.486201 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-10 00:34:30.486213 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-10 00:34:30.486224 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-10 00:34:30.486234 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-09-10 00:34:30.486245 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-09-10 00:34:30.486256 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-10 00:34:30.486267 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-10 00:34:30.486278 | orchestrator |
2025-09-10 00:34:30.486288 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2025-09-10 00:34:30.486299 | orchestrator | Wednesday 10 September 2025 00:34:16 +0000 (0:00:03.438) 0:00:10.609 ***
2025-09-10 00:34:30.486310 | orchestrator | changed: [testbed-manager]
2025-09-10 00:34:30.486321 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:34:30.486331 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:34:30.486342 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:34:30.486353 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:34:30.486363 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:34:30.486374 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:34:30.486385 | orchestrator |
2025-09-10 00:34:30.486395 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2025-09-10 00:34:30.486406 | orchestrator | Wednesday 10 September 2025 00:34:17 +0000 (0:00:01.412) 0:00:12.021 ***
2025-09-10 00:34:30.486417 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-10 00:34:30.486428 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-10 00:34:30.486438 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-09-10 00:34:30.486449 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-09-10 00:34:30.486460 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-10 00:34:30.486470 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-10 00:34:30.486481 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-10 00:34:30.486491 | orchestrator |
2025-09-10 00:34:30.486502 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2025-09-10 00:34:30.486513 | orchestrator | Wednesday 10 September 2025 00:34:19 +0000 (0:00:01.863) 0:00:13.885 ***
2025-09-10 00:34:30.486524 | orchestrator | ok: [testbed-manager]
2025-09-10 00:34:30.486535 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:34:30.486566 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:34:30.486577 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:34:30.486588 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:34:30.486599 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:34:30.486609 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:34:30.486620 | orchestrator |
2025-09-10 00:34:30.486631 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2025-09-10 00:34:30.486666 | orchestrator | Wednesday 10 September 2025 00:34:20 +0000 (0:00:01.105) 0:00:14.991 ***
2025-09-10 00:34:30.486678 | orchestrator | skipping: [testbed-manager]
2025-09-10 00:34:30.486689 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:34:30.486699 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:34:30.486721 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:34:30.486732 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:34:30.486743 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:34:30.486754 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:34:30.486765 | orchestrator |
2025-09-10 00:34:30.486776 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2025-09-10 00:34:30.486801 | orchestrator | Wednesday 10 September 2025 00:34:21 +0000 (0:00:00.648) 0:00:15.639 ***
2025-09-10 00:34:30.486813 | orchestrator | ok: [testbed-manager]
2025-09-10 00:34:30.486824 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:34:30.486834 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:34:30.486845 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:34:30.486856 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:34:30.486867 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:34:30.486877 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:34:30.486888 | orchestrator |
2025-09-10 00:34:30.486899 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2025-09-10 00:34:30.486910 | orchestrator | Wednesday 10 September 2025 00:34:23 +0000 (0:00:02.193) 0:00:17.832 ***
2025-09-10 00:34:30.486921 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:34:30.486931 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:34:30.486942 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:34:30.486953 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:34:30.486964 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:34:30.486974 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:34:30.486986 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2025-09-10 00:34:30.486998 | orchestrator |
2025-09-10 00:34:30.487009 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2025-09-10 00:34:30.487020 | orchestrator | Wednesday 10 September 2025 00:34:24 +0000 (0:00:00.920) 0:00:18.753 ***
2025-09-10 00:34:30.487031 | orchestrator | ok: [testbed-manager]
2025-09-10 00:34:30.487042 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:34:30.487052 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:34:30.487063 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:34:30.487073 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:34:30.487084 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:34:30.487094 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:34:30.487105 | orchestrator |
2025-09-10 00:34:30.487116 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2025-09-10 00:34:30.487127 | orchestrator | Wednesday 10 September 2025 00:34:26 +0000 (0:00:01.641) 0:00:20.395 ***
2025-09-10 00:34:30.487138 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-10 00:34:30.487151 | orchestrator |
2025-09-10 00:34:30.487162 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2025-09-10 00:34:30.487173 | orchestrator | Wednesday 10 September 2025 00:34:27 +0000 (0:00:01.302) 0:00:21.697 ***
2025-09-10 00:34:30.487184 | orchestrator | ok: [testbed-manager]
2025-09-10 00:34:30.487195 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:34:30.487205 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:34:30.487216 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:34:30.487227 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:34:30.487237 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:34:30.487248 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:34:30.487258 | orchestrator |
2025-09-10 00:34:30.487269 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2025-09-10 00:34:30.487280 | orchestrator | Wednesday 10 September 2025 00:34:28 +0000 (0:00:00.978) 0:00:22.675 ***
2025-09-10 00:34:30.487290 | orchestrator | ok: [testbed-manager]
2025-09-10 00:34:30.487301 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:34:30.487312 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:34:30.487331 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:34:30.487342 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:34:30.487352 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:34:30.487363 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:34:30.487373 | orchestrator |
2025-09-10 00:34:30.487384 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2025-09-10 00:34:30.487395 | orchestrator | Wednesday 10 September 2025 00:34:29 +0000 (0:00:00.920) 0:00:23.596 ***
2025-09-10 00:34:30.487406 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2025-09-10 00:34:30.487417 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2025-09-10 00:34:30.487428 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2025-09-10 00:34:30.487439 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2025-09-10 00:34:30.487449 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-10 00:34:30.487460 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2025-09-10 00:34:30.487471 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-10 00:34:30.487481 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2025-09-10 00:34:30.487492 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-10 00:34:30.487502 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2025-09-10 00:34:30.487513 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-10 00:34:30.487524 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-10 00:34:30.487535 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-10 00:34:30.487575 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-10 00:34:30.487586 | orchestrator |
2025-09-10 00:34:30.487604 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2025-09-10 00:34:46.864960 | orchestrator | Wednesday 10 September 2025 00:34:30 +0000 (0:00:01.246) 0:00:24.842 ***
2025-09-10 00:34:46.865081 | orchestrator | skipping: [testbed-manager]
2025-09-10 00:34:46.865098 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:34:46.865110 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:34:46.865121 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:34:46.865132 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:34:46.865142 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:34:46.865154 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:34:46.865165 | orchestrator |
2025-09-10 00:34:46.865193 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2025-09-10 00:34:46.865205 | orchestrator | Wednesday 10 September 2025 00:34:31 +0000 (0:00:00.651) 0:00:25.494 ***
2025-09-10 00:34:46.865219 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-0, testbed-node-1, testbed-manager, testbed-node-5, testbed-node-4, testbed-node-2, testbed-node-3
2025-09-10 00:34:46.865233 | orchestrator |
2025-09-10 00:34:46.865244 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2025-09-10 00:34:46.865256 | orchestrator | Wednesday 10 September 2025 00:34:35 +0000 (0:00:04.768) 0:00:30.262 ***
2025-09-10 00:34:46.865268 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2025-09-10 00:34:46.865282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2025-09-10 00:34:46.865294 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2025-09-10 00:34:46.865326 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2025-09-10 00:34:46.865337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2025-09-10 00:34:46.865348 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2025-09-10 00:34:46.865359 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2025-09-10 00:34:46.865370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2025-09-10 00:34:46.865382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2025-09-10 00:34:46.865399 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2025-09-10 00:34:46.865411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2025-09-10 00:34:46.865442 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2025-09-10 00:34:46.865454 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2025-09-10 00:34:46.865470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2025-09-10 00:34:46.865483 | orchestrator |
2025-09-10 00:34:46.865496 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2025-09-10 00:34:46.865509 | orchestrator | Wednesday 10 September 2025 00:34:41 +0000 (0:00:05.865) 0:00:36.128 ***
2025-09-10 00:34:46.865521 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2025-09-10 00:34:46.865569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2025-09-10 00:34:46.865584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2025-09-10 00:34:46.865597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2025-09-10 00:34:46.865610 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2025-09-10 00:34:46.865622 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12',
'192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-09-10 00:34:46.865635 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-09-10 00:34:46.865648 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-09-10 00:34:46.865660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-09-10 00:34:46.865672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-09-10 00:34:46.865685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-09-10 00:34:46.865698 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-09-10 00:34:46.865719 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-09-10 00:34:53.107902 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-09-10 00:34:53.108014 | orchestrator | 2025-09-10 00:34:53.108031 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-09-10 00:34:53.108044 | orchestrator | Wednesday 10 September 2025 00:34:46 +0000 (0:00:05.090) 0:00:41.219 *** 2025-09-10 00:34:53.108078 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-10 00:34:53.108091 | orchestrator | 2025-09-10 00:34:53.108102 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-09-10 00:34:53.108113 | orchestrator | Wednesday 10 September 2025 00:34:48 +0000 (0:00:01.299) 0:00:42.519 *** 2025-09-10 00:34:53.108124 | orchestrator | ok: [testbed-manager] 2025-09-10 00:34:53.108136 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:34:53.108147 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:34:53.108158 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:34:53.108168 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:34:53.108179 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:34:53.108190 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:34:53.108202 | orchestrator | 2025-09-10 00:34:53.108213 | orchestrator | TASK [osism.commons.network : Remove 
unused configuration files] *************** 2025-09-10 00:34:53.108224 | orchestrator | Wednesday 10 September 2025 00:34:49 +0000 (0:00:01.155) 0:00:43.674 *** 2025-09-10 00:34:53.108235 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-10 00:34:53.108246 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-10 00:34:53.108257 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-10 00:34:53.108268 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-10 00:34:53.108279 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-10 00:34:53.108307 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-10 00:34:53.108319 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-10 00:34:53.108330 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-10 00:34:53.108341 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:34:53.108353 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-10 00:34:53.108364 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-10 00:34:53.108375 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-10 00:34:53.108386 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:34:53.108397 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-10 00:34:53.108407 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-10 00:34:53.108418 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  
2025-09-10 00:34:53.108429 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-10 00:34:53.108439 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-10 00:34:53.108450 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:34:53.108461 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-10 00:34:53.108471 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-10 00:34:53.108482 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-10 00:34:53.108493 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-10 00:34:53.108504 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:34:53.108514 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-10 00:34:53.108525 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-10 00:34:53.108536 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-10 00:34:53.108596 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-10 00:34:53.108608 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:34:53.108619 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:34:53.108630 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-10 00:34:53.108641 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-10 00:34:53.108652 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-10 00:34:53.108663 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-10 00:34:53.108673 | 
orchestrator | skipping: [testbed-node-5] 2025-09-10 00:34:53.108684 | orchestrator | 2025-09-10 00:34:53.108695 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-09-10 00:34:53.108725 | orchestrator | Wednesday 10 September 2025 00:34:51 +0000 (0:00:02.015) 0:00:45.689 *** 2025-09-10 00:34:53.108736 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:34:53.108747 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:34:53.108758 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:34:53.108769 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:34:53.108780 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:34:53.108790 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:34:53.108806 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:34:53.108817 | orchestrator | 2025-09-10 00:34:53.108828 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-09-10 00:34:53.108839 | orchestrator | Wednesday 10 September 2025 00:34:51 +0000 (0:00:00.655) 0:00:46.345 *** 2025-09-10 00:34:53.108850 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:34:53.108860 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:34:53.108871 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:34:53.108882 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:34:53.108892 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:34:53.108903 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:34:53.108914 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:34:53.108924 | orchestrator | 2025-09-10 00:34:53.108935 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-10 00:34:53.108947 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-10 00:34:53.108960 | orchestrator | testbed-node-0 : ok=20  changed=5  
unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-10 00:34:53.108971 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-10 00:34:53.108982 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-10 00:34:53.108993 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-10 00:34:53.109004 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-10 00:34:53.109014 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-10 00:34:53.109025 | orchestrator | 2025-09-10 00:34:53.109036 | orchestrator | 2025-09-10 00:34:53.109047 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-10 00:34:53.109058 | orchestrator | Wednesday 10 September 2025 00:34:52 +0000 (0:00:00.728) 0:00:47.073 *** 2025-09-10 00:34:53.109069 | orchestrator | =============================================================================== 2025-09-10 00:34:53.109086 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.87s 2025-09-10 00:34:53.109097 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.09s 2025-09-10 00:34:53.109108 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.77s 2025-09-10 00:34:53.109119 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.44s 2025-09-10 00:34:53.109129 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.19s 2025-09-10 00:34:53.109140 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.06s 2025-09-10 00:34:53.109151 | orchestrator | osism.commons.network : Remove unused 
configuration files --------------- 2.02s 2025-09-10 00:34:53.109162 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.86s 2025-09-10 00:34:53.109172 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.82s 2025-09-10 00:34:53.109183 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.64s 2025-09-10 00:34:53.109194 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.41s 2025-09-10 00:34:53.109205 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.30s 2025-09-10 00:34:53.109215 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.30s 2025-09-10 00:34:53.109226 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.26s 2025-09-10 00:34:53.109236 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.25s 2025-09-10 00:34:53.109247 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.16s 2025-09-10 00:34:53.109258 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.11s 2025-09-10 00:34:53.109268 | orchestrator | osism.commons.network : Create required directories --------------------- 1.00s 2025-09-10 00:34:53.109279 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.98s 2025-09-10 00:34:53.109290 | orchestrator | osism.commons.network : Set network_configured_files fact --------------- 0.92s 2025-09-10 00:34:53.400877 | orchestrator | + osism apply wireguard 2025-09-10 00:35:05.437516 | orchestrator | 2025-09-10 00:35:05 | INFO  | Task 9124aaf6-92bc-4d1c-ba58-9c1dc12b3443 (wireguard) was prepared for execution. 
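The "Create systemd networkd netdev files" / "Create systemd networkd network files" tasks above render one .netdev/.network pair per VXLAN item. As a hedged illustration only (the actual templates shipped by osism.commons.network may differ), the vxlan0 item on testbed-manager (VNI 42, local IP 192.168.16.5, MTU 1350, address 192.168.112.5/20) could map to files like:

```ini
# /etc/systemd/network/30-vxlan0.netdev -- illustrative sketch, not the role's actual template
[NetDev]
Name=vxlan0
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=42
Local=192.168.16.5
```

```ini
# /etc/systemd/network/30-vxlan0.network -- illustrative sketch, not the role's actual template
[Match]
Name=vxlan0

[Network]
Address=192.168.112.5/20
```

The per-host 'dests' lists in the task output suggest unicast VXLAN peering; in systemd-networkd that is typically expressed as [BridgeFDB] entries (all-zeroes MACAddress=, one Destination= per peer) in the .network file of the underlying interface, which also carries a VXLAN=vxlan0 stacking directive.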
2025-09-10 00:35:05.437587 | orchestrator | 2025-09-10 00:35:05 | INFO  | It takes a moment until task 9124aaf6-92bc-4d1c-ba58-9c1dc12b3443 (wireguard) has been started and output is visible here. 2025-09-10 00:35:25.568965 | orchestrator | 2025-09-10 00:35:25.569054 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-09-10 00:35:25.569063 | orchestrator | 2025-09-10 00:35:25.569069 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-09-10 00:35:25.569090 | orchestrator | Wednesday 10 September 2025 00:35:09 +0000 (0:00:00.236) 0:00:00.236 *** 2025-09-10 00:35:25.569095 | orchestrator | ok: [testbed-manager] 2025-09-10 00:35:25.569101 | orchestrator | 2025-09-10 00:35:25.569106 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-09-10 00:35:25.569112 | orchestrator | Wednesday 10 September 2025 00:35:11 +0000 (0:00:01.556) 0:00:01.792 *** 2025-09-10 00:35:25.569117 | orchestrator | changed: [testbed-manager] 2025-09-10 00:35:25.569123 | orchestrator | 2025-09-10 00:35:25.569128 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-09-10 00:35:25.569134 | orchestrator | Wednesday 10 September 2025 00:35:17 +0000 (0:00:06.789) 0:00:08.582 *** 2025-09-10 00:35:25.569139 | orchestrator | changed: [testbed-manager] 2025-09-10 00:35:25.569144 | orchestrator | 2025-09-10 00:35:25.569149 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-09-10 00:35:25.569154 | orchestrator | Wednesday 10 September 2025 00:35:18 +0000 (0:00:00.578) 0:00:09.161 *** 2025-09-10 00:35:25.569159 | orchestrator | changed: [testbed-manager] 2025-09-10 00:35:25.569178 | orchestrator | 2025-09-10 00:35:25.569184 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-09-10 00:35:25.569190 | orchestrator 
| Wednesday 10 September 2025 00:35:18 +0000 (0:00:00.438) 0:00:09.599 *** 2025-09-10 00:35:25.569195 | orchestrator | ok: [testbed-manager] 2025-09-10 00:35:25.569200 | orchestrator | 2025-09-10 00:35:25.569205 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-09-10 00:35:25.569210 | orchestrator | Wednesday 10 September 2025 00:35:19 +0000 (0:00:00.539) 0:00:10.139 *** 2025-09-10 00:35:25.569215 | orchestrator | ok: [testbed-manager] 2025-09-10 00:35:25.569220 | orchestrator | 2025-09-10 00:35:25.569225 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-09-10 00:35:25.569230 | orchestrator | Wednesday 10 September 2025 00:35:19 +0000 (0:00:00.523) 0:00:10.662 *** 2025-09-10 00:35:25.569235 | orchestrator | ok: [testbed-manager] 2025-09-10 00:35:25.569240 | orchestrator | 2025-09-10 00:35:25.569245 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-09-10 00:35:25.569251 | orchestrator | Wednesday 10 September 2025 00:35:20 +0000 (0:00:00.427) 0:00:11.090 *** 2025-09-10 00:35:25.569256 | orchestrator | changed: [testbed-manager] 2025-09-10 00:35:25.569260 | orchestrator | 2025-09-10 00:35:25.569266 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-09-10 00:35:25.569271 | orchestrator | Wednesday 10 September 2025 00:35:21 +0000 (0:00:01.266) 0:00:12.357 *** 2025-09-10 00:35:25.569276 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-10 00:35:25.569281 | orchestrator | changed: [testbed-manager] 2025-09-10 00:35:25.569286 | orchestrator | 2025-09-10 00:35:25.569291 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-09-10 00:35:25.569296 | orchestrator | Wednesday 10 September 2025 00:35:22 +0000 (0:00:00.880) 0:00:13.237 *** 2025-09-10 00:35:25.569301 | orchestrator | changed: 
[testbed-manager] 2025-09-10 00:35:25.569306 | orchestrator | 2025-09-10 00:35:25.569311 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-09-10 00:35:25.569316 | orchestrator | Wednesday 10 September 2025 00:35:24 +0000 (0:00:01.693) 0:00:14.930 *** 2025-09-10 00:35:25.569321 | orchestrator | changed: [testbed-manager] 2025-09-10 00:35:25.569326 | orchestrator | 2025-09-10 00:35:25.569331 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-10 00:35:25.569337 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-10 00:35:25.569343 | orchestrator | 2025-09-10 00:35:25.569348 | orchestrator | 2025-09-10 00:35:25.569353 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-10 00:35:25.569358 | orchestrator | Wednesday 10 September 2025 00:35:25 +0000 (0:00:00.981) 0:00:15.912 *** 2025-09-10 00:35:25.569363 | orchestrator | =============================================================================== 2025-09-10 00:35:25.569368 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.79s 2025-09-10 00:35:25.569373 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.69s 2025-09-10 00:35:25.569378 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.56s 2025-09-10 00:35:25.569383 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.27s 2025-09-10 00:35:25.569388 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.98s 2025-09-10 00:35:25.569393 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.88s 2025-09-10 00:35:25.569399 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.58s 
2025-09-10 00:35:25.569404 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.54s 2025-09-10 00:35:25.569409 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.52s 2025-09-10 00:35:25.569414 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.44s 2025-09-10 00:35:25.569423 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.43s 2025-09-10 00:35:25.866511 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-09-10 00:35:25.904612 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-09-10 00:35:25.904664 | orchestrator | Dload Upload Total Spent Left Speed 2025-09-10 00:35:25.981586 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 186 0 --:--:-- --:--:-- --:--:-- 189 2025-09-10 00:35:25.992026 | orchestrator | + osism apply --environment custom workarounds 2025-09-10 00:35:27.876637 | orchestrator | 2025-09-10 00:35:27 | INFO  | Trying to run play workarounds in environment custom 2025-09-10 00:35:37.968357 | orchestrator | 2025-09-10 00:35:37 | INFO  | Task 40a87f23-ae0a-4c36-a587-f936323acff8 (workarounds) was prepared for execution. 2025-09-10 00:35:37.968475 | orchestrator | 2025-09-10 00:35:37 | INFO  | It takes a moment until task 40a87f23-ae0a-4c36-a587-f936323acff8 (workarounds) has been started and output is visible here. 
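The wireguard play above generates a server keypair and preshared key, writes wg0.conf, and manages wg-quick@wg0.service. The wg-quick configuration format itself is standard; a minimal sketch of the shape such a file takes (all key material and addresses below are placeholders, not the role's actual output):

```ini
# /etc/wireguard/wg0.conf -- illustrative; values are placeholders
[Interface]
PrivateKey = <server-private-key>
Address = 192.168.0.1/24
ListenPort = 51820

[Peer]
PublicKey = <client-public-key>
PresharedKey = <preshared-key>
AllowedIPs = 192.168.0.2/32
```

The "Restart wg0 service" handler at the end of the play corresponds to restarting the wg-quick@wg0 systemd unit after the configuration changed.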
2025-09-10 00:36:04.021489 | orchestrator | 2025-09-10 00:36:04.021643 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-10 00:36:04.021662 | orchestrator | 2025-09-10 00:36:04.021675 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-09-10 00:36:04.021687 | orchestrator | Wednesday 10 September 2025 00:35:41 +0000 (0:00:00.151) 0:00:00.151 *** 2025-09-10 00:36:04.021699 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-09-10 00:36:04.021711 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-09-10 00:36:04.021722 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-09-10 00:36:04.021734 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-09-10 00:36:04.021745 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-09-10 00:36:04.021756 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-09-10 00:36:04.021766 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-09-10 00:36:04.021777 | orchestrator | 2025-09-10 00:36:04.021788 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-09-10 00:36:04.021799 | orchestrator | 2025-09-10 00:36:04.021810 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-09-10 00:36:04.021821 | orchestrator | Wednesday 10 September 2025 00:35:42 +0000 (0:00:00.787) 0:00:00.938 *** 2025-09-10 00:36:04.021833 | orchestrator | ok: [testbed-manager] 2025-09-10 00:36:04.021845 | orchestrator | 2025-09-10 00:36:04.021856 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-09-10 00:36:04.021867 | orchestrator | 2025-09-10 00:36:04.021878 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2025-09-10 00:36:04.021890 | orchestrator | Wednesday 10 September 2025 00:35:45 +0000 (0:00:02.335) 0:00:03.274 *** 2025-09-10 00:36:04.021901 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:36:04.021912 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:36:04.021923 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:36:04.021934 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:36:04.021945 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:36:04.021956 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:36:04.021967 | orchestrator | 2025-09-10 00:36:04.021979 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-09-10 00:36:04.021991 | orchestrator | 2025-09-10 00:36:04.022002 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-09-10 00:36:04.022013 | orchestrator | Wednesday 10 September 2025 00:35:46 +0000 (0:00:01.870) 0:00:05.144 *** 2025-09-10 00:36:04.022111 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-10 00:36:04.022126 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-10 00:36:04.022160 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-10 00:36:04.022174 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-10 00:36:04.022187 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-10 00:36:04.022200 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-10 00:36:04.022214 | orchestrator | 2025-09-10 00:36:04.022227 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2025-09-10 00:36:04.022240 | orchestrator | Wednesday 10 September 2025 00:35:48 +0000 (0:00:01.518) 0:00:06.663 *** 2025-09-10 00:36:04.022254 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:36:04.022268 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:36:04.022281 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:36:04.022294 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:36:04.022307 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:36:04.022320 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:36:04.022333 | orchestrator | 2025-09-10 00:36:04.022346 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-09-10 00:36:04.022360 | orchestrator | Wednesday 10 September 2025 00:35:52 +0000 (0:00:04.039) 0:00:10.702 *** 2025-09-10 00:36:04.022374 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:36:04.022387 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:36:04.022398 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:36:04.022409 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:36:04.022420 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:36:04.022431 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:36:04.022441 | orchestrator | 2025-09-10 00:36:04.022453 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-09-10 00:36:04.022463 | orchestrator | 2025-09-10 00:36:04.022475 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-09-10 00:36:04.022486 | orchestrator | Wednesday 10 September 2025 00:35:53 +0000 (0:00:00.718) 0:00:11.421 *** 2025-09-10 00:36:04.022497 | orchestrator | changed: [testbed-manager] 2025-09-10 00:36:04.022507 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:36:04.022518 | orchestrator | changed: [testbed-node-5] 2025-09-10 
00:36:04.022529 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:36:04.022540 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:36:04.022550 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:36:04.022582 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:36:04.022593 | orchestrator | 2025-09-10 00:36:04.022604 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-09-10 00:36:04.022615 | orchestrator | Wednesday 10 September 2025 00:35:54 +0000 (0:00:01.720) 0:00:13.142 *** 2025-09-10 00:36:04.022635 | orchestrator | changed: [testbed-manager] 2025-09-10 00:36:04.022647 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:36:04.022658 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:36:04.022669 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:36:04.022679 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:36:04.022690 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:36:04.022718 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:36:04.022730 | orchestrator | 2025-09-10 00:36:04.022741 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-09-10 00:36:04.022753 | orchestrator | Wednesday 10 September 2025 00:35:56 +0000 (0:00:01.652) 0:00:14.794 *** 2025-09-10 00:36:04.022764 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:36:04.022775 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:36:04.022786 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:36:04.022797 | orchestrator | ok: [testbed-manager] 2025-09-10 00:36:04.022808 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:36:04.022819 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:36:04.022836 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:36:04.022847 | orchestrator | 2025-09-10 00:36:04.022857 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-09-10 00:36:04.022868 | orchestrator 
| Wednesday 10 September 2025 00:35:58 +0000 (0:00:01.625) 0:00:16.420 *** 2025-09-10 00:36:04.022880 | orchestrator | changed: [testbed-manager] 2025-09-10 00:36:04.022890 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:36:04.022901 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:36:04.022912 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:36:04.022923 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:36:04.022934 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:36:04.022945 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:36:04.022956 | orchestrator | 2025-09-10 00:36:04.022967 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-09-10 00:36:04.022978 | orchestrator | Wednesday 10 September 2025 00:36:00 +0000 (0:00:02.020) 0:00:18.440 *** 2025-09-10 00:36:04.022989 | orchestrator | skipping: [testbed-manager] 2025-09-10 00:36:04.023000 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:36:04.023010 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:36:04.023021 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:36:04.023032 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:36:04.023043 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:36:04.023053 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:36:04.023064 | orchestrator | 2025-09-10 00:36:04.023075 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-09-10 00:36:04.023086 | orchestrator | 2025-09-10 00:36:04.023097 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-09-10 00:36:04.023109 | orchestrator | Wednesday 10 September 2025 00:36:00 +0000 (0:00:00.609) 0:00:19.050 *** 2025-09-10 00:36:04.023120 | orchestrator | ok: [testbed-manager] 2025-09-10 00:36:04.023131 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:36:04.023141 | orchestrator | ok: 
[testbed-node-3] 2025-09-10 00:36:04.023152 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:36:04.023163 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:36:04.023174 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:36:04.023185 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:36:04.023196 | orchestrator | 2025-09-10 00:36:04.023207 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-10 00:36:04.023219 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-10 00:36:04.023231 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-10 00:36:04.023242 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-10 00:36:04.023253 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-10 00:36:04.023264 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-10 00:36:04.023275 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-10 00:36:04.023286 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-10 00:36:04.023297 | orchestrator | 2025-09-10 00:36:04.023308 | orchestrator | 2025-09-10 00:36:04.023319 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-10 00:36:04.023330 | orchestrator | Wednesday 10 September 2025 00:36:03 +0000 (0:00:03.085) 0:00:22.135 *** 2025-09-10 00:36:04.023347 | orchestrator | =============================================================================== 2025-09-10 00:36:04.023358 | orchestrator | Run update-ca-certificates ---------------------------------------------- 4.04s 2025-09-10 00:36:04.023369 | orchestrator | 
Install python3-docker -------------------------------------------------- 3.09s 2025-09-10 00:36:04.023379 | orchestrator | Apply netplan configuration --------------------------------------------- 2.34s 2025-09-10 00:36:04.023390 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 2.02s 2025-09-10 00:36:04.023401 | orchestrator | Apply netplan configuration --------------------------------------------- 1.87s 2025-09-10 00:36:04.023412 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.72s 2025-09-10 00:36:04.023423 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.65s 2025-09-10 00:36:04.023434 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.63s 2025-09-10 00:36:04.023448 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.52s 2025-09-10 00:36:04.023460 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.79s 2025-09-10 00:36:04.023471 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.72s 2025-09-10 00:36:04.023488 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.61s 2025-09-10 00:36:04.683343 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-09-10 00:36:16.755476 | orchestrator | 2025-09-10 00:36:16 | INFO  | Task cc29b6fc-ee4c-4076-909a-32d4335a5a24 (reboot) was prepared for execution. 2025-09-10 00:36:16.755617 | orchestrator | 2025-09-10 00:36:16 | INFO  | It takes a moment until task cc29b6fc-ee4c-4076-909a-32d4335a5a24 (reboot) has been started and output is visible here. 
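The reboot play that follows is invoked with `-e ireallymeanit=yes`, which is why its "Exit playbook, if user did not mean to reboot systems" task is skipped on every node. A minimal shell sketch of that confirmation-guard pattern; `confirm_and_reboot` and `do_reboot` are hypothetical names for illustration only (the real play is Ansible, and it issues the reboot without waiting — a separate wait-for-connection step follows):

```shell
#!/bin/sh
# Hedged sketch of the ireallymeanit confirmation guard used by the play.
# do_reboot is a stand-in for the real reboot task; the playbook fires the
# reboot and does not wait for it to complete.
do_reboot() { echo "reboot issued on $1"; }

confirm_and_reboot() {
  node=$1
  answer=$2
  if [ "$answer" != "yes" ]; then
    echo "aborting: pass ireallymeanit=yes to really reboot $node"
    return 1
  fi
  do_reboot "$node"
}
```

Usage: `confirm_and_reboot testbed-node-0 yes` issues the reboot; any other answer aborts, mirroring the skip/changed pattern in the play output below.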
2025-09-10 00:36:26.840652 | orchestrator | 2025-09-10 00:36:26.840772 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-10 00:36:26.840790 | orchestrator | 2025-09-10 00:36:26.840802 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-10 00:36:26.840814 | orchestrator | Wednesday 10 September 2025 00:36:20 +0000 (0:00:00.212) 0:00:00.212 *** 2025-09-10 00:36:26.840826 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:36:26.840838 | orchestrator | 2025-09-10 00:36:26.840849 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-10 00:36:26.840860 | orchestrator | Wednesday 10 September 2025 00:36:20 +0000 (0:00:00.112) 0:00:00.325 *** 2025-09-10 00:36:26.840871 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:36:26.840883 | orchestrator | 2025-09-10 00:36:26.840894 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-10 00:36:26.840905 | orchestrator | Wednesday 10 September 2025 00:36:21 +0000 (0:00:00.996) 0:00:01.322 *** 2025-09-10 00:36:26.840916 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:36:26.840926 | orchestrator | 2025-09-10 00:36:26.840938 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-10 00:36:26.840949 | orchestrator | 2025-09-10 00:36:26.840960 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-10 00:36:26.840971 | orchestrator | Wednesday 10 September 2025 00:36:21 +0000 (0:00:00.111) 0:00:01.434 *** 2025-09-10 00:36:26.840982 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:36:26.840993 | orchestrator | 2025-09-10 00:36:26.841004 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-10 00:36:26.841015 | orchestrator | Wednesday 10 
September 2025 00:36:22 +0000 (0:00:00.096) 0:00:01.530 *** 2025-09-10 00:36:26.841025 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:36:26.841036 | orchestrator | 2025-09-10 00:36:26.841047 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-10 00:36:26.841058 | orchestrator | Wednesday 10 September 2025 00:36:22 +0000 (0:00:00.673) 0:00:02.203 *** 2025-09-10 00:36:26.841069 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:36:26.841080 | orchestrator | 2025-09-10 00:36:26.841117 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-10 00:36:26.841131 | orchestrator | 2025-09-10 00:36:26.841144 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-10 00:36:26.841156 | orchestrator | Wednesday 10 September 2025 00:36:22 +0000 (0:00:00.123) 0:00:02.327 *** 2025-09-10 00:36:26.841168 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:36:26.841181 | orchestrator | 2025-09-10 00:36:26.841194 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-10 00:36:26.841205 | orchestrator | Wednesday 10 September 2025 00:36:23 +0000 (0:00:00.218) 0:00:02.546 *** 2025-09-10 00:36:26.841218 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:36:26.841230 | orchestrator | 2025-09-10 00:36:26.841243 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-10 00:36:26.841255 | orchestrator | Wednesday 10 September 2025 00:36:23 +0000 (0:00:00.671) 0:00:03.217 *** 2025-09-10 00:36:26.841268 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:36:26.841280 | orchestrator | 2025-09-10 00:36:26.841292 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-10 00:36:26.841304 | orchestrator | 2025-09-10 00:36:26.841316 | orchestrator | TASK [Exit 
playbook, if user did not mean to reboot systems] ******************* 2025-09-10 00:36:26.841328 | orchestrator | Wednesday 10 September 2025 00:36:23 +0000 (0:00:00.131) 0:00:03.349 *** 2025-09-10 00:36:26.841341 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:36:26.841354 | orchestrator | 2025-09-10 00:36:26.841366 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-10 00:36:26.841379 | orchestrator | Wednesday 10 September 2025 00:36:23 +0000 (0:00:00.114) 0:00:03.463 *** 2025-09-10 00:36:26.841391 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:36:26.841403 | orchestrator | 2025-09-10 00:36:26.841416 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-10 00:36:26.841428 | orchestrator | Wednesday 10 September 2025 00:36:24 +0000 (0:00:00.648) 0:00:04.112 *** 2025-09-10 00:36:26.841442 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:36:26.841463 | orchestrator | 2025-09-10 00:36:26.841483 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-10 00:36:26.841501 | orchestrator | 2025-09-10 00:36:26.841523 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-10 00:36:26.841544 | orchestrator | Wednesday 10 September 2025 00:36:24 +0000 (0:00:00.123) 0:00:04.236 *** 2025-09-10 00:36:26.841588 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:36:26.841602 | orchestrator | 2025-09-10 00:36:26.841613 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-10 00:36:26.841624 | orchestrator | Wednesday 10 September 2025 00:36:24 +0000 (0:00:00.102) 0:00:04.338 *** 2025-09-10 00:36:26.841635 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:36:26.841646 | orchestrator | 2025-09-10 00:36:26.841657 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2025-09-10 00:36:26.841668 | orchestrator | Wednesday 10 September 2025 00:36:25 +0000 (0:00:00.691) 0:00:05.030 *** 2025-09-10 00:36:26.841679 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:36:26.841689 | orchestrator | 2025-09-10 00:36:26.841701 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-10 00:36:26.841711 | orchestrator | 2025-09-10 00:36:26.841722 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-10 00:36:26.841733 | orchestrator | Wednesday 10 September 2025 00:36:25 +0000 (0:00:00.131) 0:00:05.162 *** 2025-09-10 00:36:26.841744 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:36:26.841754 | orchestrator | 2025-09-10 00:36:26.841765 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-10 00:36:26.841776 | orchestrator | Wednesday 10 September 2025 00:36:25 +0000 (0:00:00.104) 0:00:05.266 *** 2025-09-10 00:36:26.841787 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:36:26.841798 | orchestrator | 2025-09-10 00:36:26.841809 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-10 00:36:26.841830 | orchestrator | Wednesday 10 September 2025 00:36:26 +0000 (0:00:00.664) 0:00:05.931 *** 2025-09-10 00:36:26.841860 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:36:26.841871 | orchestrator | 2025-09-10 00:36:26.841883 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-10 00:36:26.841895 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-10 00:36:26.841907 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-10 00:36:26.841918 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2025-09-10 00:36:26.841929 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-10 00:36:26.841939 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-10 00:36:26.841950 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-10 00:36:26.841961 | orchestrator | 2025-09-10 00:36:26.841972 | orchestrator | 2025-09-10 00:36:26.841983 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-10 00:36:26.841994 | orchestrator | Wednesday 10 September 2025 00:36:26 +0000 (0:00:00.042) 0:00:05.974 *** 2025-09-10 00:36:26.842004 | orchestrator | =============================================================================== 2025-09-10 00:36:26.842071 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.35s 2025-09-10 00:36:26.842088 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.75s 2025-09-10 00:36:26.842099 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.66s 2025-09-10 00:36:27.197825 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-09-10 00:36:39.308737 | orchestrator | 2025-09-10 00:36:39 | INFO  | Task 3790a488-e017-40fd-9060-4824b305528c (wait-for-connection) was prepared for execution. 2025-09-10 00:36:39.308857 | orchestrator | 2025-09-10 00:36:39 | INFO  | It takes a moment until task 3790a488-e017-40fd-9060-4824b305528c (wait-for-connection) has been started and output is visible here. 
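Because the reboot above is fire-and-forget, `osism apply wait-for-connection` then polls each node until it answers again. A hedged sketch of that polling idea under stated assumptions: `probe_host` is a hypothetical stand-in for a real reachability check (the actual play uses Ansible's `wait_for_connection` module over SSH):

```shell
#!/bin/sh
# Poll a host until probe_host succeeds or the deadline passes.
# probe_host is a placeholder; in practice this would be an SSH probe.
wait_for_host() {
  host=$1
  timeout=${2:-300}
  interval=${3:-5}
  deadline=$(( $(date +%s) + timeout ))
  until probe_host "$host"; do
    if [ "$(date +%s)" -ge "$deadline" ]; then
      echo "timed out waiting for $host"
      return 1
    fi
    sleep "$interval"
  done
  echo "$host reachable"
}
```

The ~11.5 s the task takes in the recap below is dominated by exactly this kind of retry loop while the freshly rebooted nodes finish booting.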
2025-09-10 00:36:55.292736 | orchestrator | 2025-09-10 00:36:55.292854 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-09-10 00:36:55.292872 | orchestrator | 2025-09-10 00:36:55.292884 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-09-10 00:36:55.292895 | orchestrator | Wednesday 10 September 2025 00:36:43 +0000 (0:00:00.238) 0:00:00.238 *** 2025-09-10 00:36:55.292907 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:36:55.292918 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:36:55.292930 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:36:55.292941 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:36:55.292952 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:36:55.292962 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:36:55.292973 | orchestrator | 2025-09-10 00:36:55.292985 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-10 00:36:55.292997 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-10 00:36:55.293010 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-10 00:36:55.293021 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-10 00:36:55.293056 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-10 00:36:55.293085 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-10 00:36:55.293097 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-10 00:36:55.293108 | orchestrator | 2025-09-10 00:36:55.293119 | orchestrator | 2025-09-10 00:36:55.293130 | orchestrator | TASKS RECAP 
******************************************************************** 2025-09-10 00:36:55.293145 | orchestrator | Wednesday 10 September 2025 00:36:54 +0000 (0:00:11.566) 0:00:11.805 *** 2025-09-10 00:36:55.293156 | orchestrator | =============================================================================== 2025-09-10 00:36:55.293167 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.57s 2025-09-10 00:36:55.589855 | orchestrator | + osism apply hddtemp 2025-09-10 00:37:07.567714 | orchestrator | 2025-09-10 00:37:07 | INFO  | Task ff0e11cc-8633-407d-9ba4-36eaf41da3f3 (hddtemp) was prepared for execution. 2025-09-10 00:37:07.567832 | orchestrator | 2025-09-10 00:37:07 | INFO  | It takes a moment until task ff0e11cc-8633-407d-9ba4-36eaf41da3f3 (hddtemp) has been started and output is visible here. 2025-09-10 00:37:35.657550 | orchestrator | 2025-09-10 00:37:35.657720 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-09-10 00:37:35.657739 | orchestrator | 2025-09-10 00:37:35.657752 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-09-10 00:37:35.657763 | orchestrator | Wednesday 10 September 2025 00:37:11 +0000 (0:00:00.297) 0:00:00.297 *** 2025-09-10 00:37:35.657775 | orchestrator | ok: [testbed-manager] 2025-09-10 00:37:35.657787 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:37:35.657798 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:37:35.657809 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:37:35.657820 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:37:35.657830 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:37:35.657841 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:37:35.657852 | orchestrator | 2025-09-10 00:37:35.657863 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-09-10 00:37:35.657874 | orchestrator | Wednesday 10 September 
2025 00:37:12 +0000 (0:00:00.681) 0:00:00.979 *** 2025-09-10 00:37:35.657888 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-10 00:37:35.657902 | orchestrator | 2025-09-10 00:37:35.657913 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-09-10 00:37:35.657924 | orchestrator | Wednesday 10 September 2025 00:37:13 +0000 (0:00:01.176) 0:00:02.155 *** 2025-09-10 00:37:35.657935 | orchestrator | ok: [testbed-manager] 2025-09-10 00:37:35.657946 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:37:35.657957 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:37:35.657968 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:37:35.657979 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:37:35.657990 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:37:35.658000 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:37:35.658011 | orchestrator | 2025-09-10 00:37:35.658079 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-09-10 00:37:35.658094 | orchestrator | Wednesday 10 September 2025 00:37:15 +0000 (0:00:02.073) 0:00:04.229 *** 2025-09-10 00:37:35.658107 | orchestrator | changed: [testbed-manager] 2025-09-10 00:37:35.658120 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:37:35.658133 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:37:35.658145 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:37:35.658158 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:37:35.658193 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:37:35.658206 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:37:35.658218 | orchestrator | 2025-09-10 00:37:35.658231 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2025-09-10 00:37:35.658243 | orchestrator | Wednesday 10 September 2025 00:37:16 +0000 (0:00:01.088) 0:00:05.318 *** 2025-09-10 00:37:35.658256 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:37:35.658268 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:37:35.658280 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:37:35.658292 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:37:35.658304 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:37:35.658317 | orchestrator | ok: [testbed-manager] 2025-09-10 00:37:35.658329 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:37:35.658342 | orchestrator | 2025-09-10 00:37:35.658354 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-09-10 00:37:35.658367 | orchestrator | Wednesday 10 September 2025 00:37:18 +0000 (0:00:02.247) 0:00:07.566 *** 2025-09-10 00:37:35.658380 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:37:35.658392 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:37:35.658404 | orchestrator | changed: [testbed-manager] 2025-09-10 00:37:35.658417 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:37:35.658429 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:37:35.658442 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:37:35.658454 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:37:35.658465 | orchestrator | 2025-09-10 00:37:35.658476 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-09-10 00:37:35.658487 | orchestrator | Wednesday 10 September 2025 00:37:19 +0000 (0:00:00.806) 0:00:08.372 *** 2025-09-10 00:37:35.658498 | orchestrator | changed: [testbed-manager] 2025-09-10 00:37:35.658508 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:37:35.658519 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:37:35.658529 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:37:35.658540 | orchestrator | changed: 
[testbed-node-0] 2025-09-10 00:37:35.658550 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:37:35.658561 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:37:35.658594 | orchestrator | 2025-09-10 00:37:35.658605 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-09-10 00:37:35.658616 | orchestrator | Wednesday 10 September 2025 00:37:32 +0000 (0:00:12.339) 0:00:20.712 *** 2025-09-10 00:37:35.658627 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-10 00:37:35.658638 | orchestrator | 2025-09-10 00:37:35.658649 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-09-10 00:37:35.658660 | orchestrator | Wednesday 10 September 2025 00:37:33 +0000 (0:00:01.391) 0:00:22.103 *** 2025-09-10 00:37:35.658670 | orchestrator | changed: [testbed-manager] 2025-09-10 00:37:35.658696 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:37:35.658707 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:37:35.658718 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:37:35.658728 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:37:35.658739 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:37:35.658750 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:37:35.658760 | orchestrator | 2025-09-10 00:37:35.658771 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-10 00:37:35.658782 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-10 00:37:35.658814 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-10 00:37:35.658826 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-10 00:37:35.658846 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-10 00:37:35.658858 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-10 00:37:35.658868 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-10 00:37:35.658879 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-10 00:37:35.658890 | orchestrator | 2025-09-10 00:37:35.658901 | orchestrator | 2025-09-10 00:37:35.658912 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-10 00:37:35.658923 | orchestrator | Wednesday 10 September 2025 00:37:35 +0000 (0:00:01.837) 0:00:23.940 *** 2025-09-10 00:37:35.658933 | orchestrator | =============================================================================== 2025-09-10 00:37:35.658944 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.34s 2025-09-10 00:37:35.658955 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 2.25s 2025-09-10 00:37:35.658966 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.07s 2025-09-10 00:37:35.658976 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.84s 2025-09-10 00:37:35.658987 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.39s 2025-09-10 00:37:35.658998 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.18s 2025-09-10 00:37:35.659009 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.09s 2025-09-10 00:37:35.659019 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.81s 2025-09-10 00:37:35.659030 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.68s 2025-09-10 00:37:35.943754 | orchestrator | ++ semver latest 7.1.1 2025-09-10 00:37:36.010500 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-10 00:37:36.010551 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-10 00:37:36.010557 | orchestrator | + sudo systemctl restart manager.service 2025-09-10 00:37:49.287019 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-09-10 00:37:49.287118 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-09-10 00:37:49.287133 | orchestrator | + local max_attempts=60 2025-09-10 00:37:49.287146 | orchestrator | + local name=ceph-ansible 2025-09-10 00:37:49.287157 | orchestrator | + local attempt_num=1 2025-09-10 00:37:49.287169 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-10 00:37:49.320745 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-10 00:37:49.320772 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-10 00:37:49.320784 | orchestrator | + sleep 5 2025-09-10 00:37:54.323533 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-10 00:37:54.361826 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-10 00:37:54.361894 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-10 00:37:54.361906 | orchestrator | + sleep 5 2025-09-10 00:37:59.365042 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-10 00:37:59.400397 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-10 00:37:59.400470 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-10 00:37:59.400490 | orchestrator | + sleep 5 2025-09-10 00:38:04.406129 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-10 00:38:04.446011 | orchestrator | + 
[[ unhealthy == \h\e\a\l\t\h\y ]]
2025-09-10 00:38:04.446122 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-10 00:38:04.446134 | orchestrator | + sleep 5
2025-09-10 00:38:09.452805 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-10 00:38:09.501534 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-09-10 00:38:09.501667 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-10 00:38:09.501683 | orchestrator | + sleep 5
2025-09-10 00:38:14.507037 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-10 00:38:14.547723 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-09-10 00:38:14.547786 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-10 00:38:14.547800 | orchestrator | + sleep 5
2025-09-10 00:38:19.553086 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-10 00:38:19.590165 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-09-10 00:38:19.590235 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-10 00:38:19.590250 | orchestrator | + sleep 5
2025-09-10 00:38:24.593235 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-10 00:38:24.642715 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-09-10 00:38:24.642755 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-10 00:38:24.642767 | orchestrator | + sleep 5
2025-09-10 00:38:29.647491 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-10 00:38:29.697872 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-09-10 00:38:29.697942 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-10 00:38:29.697956 | orchestrator | + sleep 5
2025-09-10 00:38:34.702935 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-10 00:38:34.743436 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-09-10 00:38:34.743544 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-10 00:38:34.743560 | orchestrator | + sleep 5
2025-09-10 00:38:39.748183 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-10 00:38:39.783474 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-09-10 00:38:39.783523 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-10 00:38:39.783536 | orchestrator | + sleep 5
2025-09-10 00:38:44.787764 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-10 00:38:44.826180 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-09-10 00:38:44.826229 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-10 00:38:44.826238 | orchestrator | + sleep 5
2025-09-10 00:38:49.830416 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-10 00:38:49.867356 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-09-10 00:38:49.867395 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-10 00:38:49.867407 | orchestrator | + sleep 5
2025-09-10 00:38:54.871787 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-10 00:38:54.904464 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-10 00:38:54.904533 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-09-10 00:38:54.904548 | orchestrator | + local max_attempts=60
2025-09-10 00:38:54.904562 | orchestrator | + local name=kolla-ansible
2025-09-10 00:38:54.904574 | orchestrator | + local attempt_num=1
2025-09-10 00:38:54.905185 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-09-10 00:38:54.947553 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-10 00:38:54.947637 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-09-10 00:38:54.947651 | orchestrator | + local max_attempts=60
2025-09-10 00:38:54.947664 | orchestrator | + local name=osism-ansible
2025-09-10 00:38:54.947676 | orchestrator | + local attempt_num=1
2025-09-10 00:38:54.948521 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-09-10 00:38:54.986985 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-10 00:38:54.987040 | orchestrator | + [[ true == \t\r\u\e ]]
2025-09-10 00:38:54.987053 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-09-10 00:38:55.151512 | orchestrator | ARA in ceph-ansible already disabled.
2025-09-10 00:38:55.282510 | orchestrator | ARA in kolla-ansible already disabled.
2025-09-10 00:38:55.426682 | orchestrator | ARA in osism-ansible already disabled.
2025-09-10 00:38:55.580570 | orchestrator | ARA in osism-kubernetes already disabled.
2025-09-10 00:38:55.581411 | orchestrator | + osism apply gather-facts
2025-09-10 00:39:07.708110 | orchestrator | 2025-09-10 00:39:07 | INFO  | Task 9d0b741b-d287-437c-ba2b-40d141c8f43e (gather-facts) was prepared for execution.
2025-09-10 00:39:07.708222 | orchestrator | 2025-09-10 00:39:07 | INFO  | It takes a moment until task 9d0b741b-d287-437c-ba2b-40d141c8f43e (gather-facts) has been started and output is visible here.
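The polling loop traced above can be reconstructed roughly as follows. This is a minimal sketch inferred from the xtrace output (the actual helper lives in the testbed configuration scripts and may differ in detail; the trace also invokes docker by its absolute path `/usr/bin/docker`):

```shell
#!/usr/bin/env bash
# Sketch of the wait_for_container_healthy helper visible in the trace above.
# Names mirror the trace; implementation details are assumptions.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1

    # Poll the container's Docker healthcheck status until it reports "healthy".
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "Container ${name} did not become healthy in time" >&2
            return 1
        fi
        sleep 5
    done
}
```

In the job this is called for `ceph-ansible`, `kolla-ansible`, and `osism-ansible` in turn; the first call loops through `unhealthy` and `starting` for about a minute before the healthcheck flips to `healthy`.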
2025-09-10 00:39:21.363258 | orchestrator |
2025-09-10 00:39:21.363371 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-10 00:39:21.363417 | orchestrator |
2025-09-10 00:39:21.363430 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-10 00:39:21.363442 | orchestrator | Wednesday 10 September 2025 00:39:11 +0000 (0:00:00.241) 0:00:00.241 ***
2025-09-10 00:39:21.363453 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:39:21.363464 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:39:21.363475 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:39:21.363485 | orchestrator | ok: [testbed-manager]
2025-09-10 00:39:21.363496 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:39:21.363506 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:39:21.363517 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:39:21.363528 | orchestrator |
2025-09-10 00:39:21.363539 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-09-10 00:39:21.363549 | orchestrator |
2025-09-10 00:39:21.363560 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-09-10 00:39:21.363571 | orchestrator | Wednesday 10 September 2025 00:39:20 +0000 (0:00:08.501) 0:00:08.742 ***
2025-09-10 00:39:21.363629 | orchestrator | skipping: [testbed-manager]
2025-09-10 00:39:21.363643 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:39:21.363653 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:39:21.363664 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:39:21.363674 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:39:21.363685 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:39:21.363696 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:39:21.363707 | orchestrator |
2025-09-10 00:39:21.363718 | orchestrator | PLAY RECAP *********************************************************************
2025-09-10 00:39:21.363729 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-10 00:39:21.363741 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-10 00:39:21.363752 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-10 00:39:21.363763 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-10 00:39:21.363773 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-10 00:39:21.363784 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-10 00:39:21.363796 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-10 00:39:21.363809 | orchestrator |
2025-09-10 00:39:21.363821 | orchestrator |
2025-09-10 00:39:21.363833 | orchestrator | TASKS RECAP ********************************************************************
2025-09-10 00:39:21.363846 | orchestrator | Wednesday 10 September 2025 00:39:20 +0000 (0:00:00.603) 0:00:09.346 ***
2025-09-10 00:39:21.363859 | orchestrator | ===============================================================================
2025-09-10 00:39:21.363887 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.50s
2025-09-10 00:39:21.363900 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.60s
2025-09-10 00:39:21.674233 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2025-09-10 00:39:21.693792 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2025-09-10 00:39:21.707623 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2025-09-10 00:39:21.719848 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2025-09-10 00:39:21.730854 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2025-09-10 00:39:21.750228 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2025-09-10 00:39:21.769309 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2025-09-10 00:39:21.783612 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2025-09-10 00:39:21.800164 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2025-09-10 00:39:21.814398 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2025-09-10 00:39:21.832942 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2025-09-10 00:39:21.846160 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2025-09-10 00:39:21.863414 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2025-09-10 00:39:21.881184 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2025-09-10 00:39:21.903743 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2025-09-10 00:39:21.923991 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2025-09-10 00:39:21.945516 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2025-09-10 00:39:21.968206 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2025-09-10 00:39:21.988126 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2025-09-10 00:39:22.009867 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2025-09-10 00:39:22.033767 | orchestrator | + [[ false == \t\r\u\e ]]
2025-09-10 00:39:22.214105 | orchestrator | ok: Runtime: 0:23:40.645272
2025-09-10 00:39:22.342363 |
2025-09-10 00:39:22.342537 | TASK [Deploy services]
2025-09-10 00:39:22.876343 | orchestrator | skipping: Conditional result was False
2025-09-10 00:39:22.893384 |
2025-09-10 00:39:22.893548 | TASK [Deploy in a nutshell]
2025-09-10 00:39:23.562613 | orchestrator | + set -e
2025-09-10 00:39:23.564278 | orchestrator |
2025-09-10 00:39:23.564315 | orchestrator | # PULL IMAGES
2025-09-10 00:39:23.564327 | orchestrator |
2025-09-10 00:39:23.564344 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-09-10 00:39:23.564363 | orchestrator | ++ export INTERACTIVE=false
2025-09-10 00:39:23.564376 | orchestrator | ++ INTERACTIVE=false
2025-09-10 00:39:23.564416 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-09-10 00:39:23.564436 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-09-10 00:39:23.564450 | orchestrator | + source /opt/manager-vars.sh
2025-09-10 00:39:23.564460 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-09-10 00:39:23.564477 | orchestrator | ++ NUMBER_OF_NODES=6
2025-09-10 00:39:23.564487 | orchestrator | ++ export CEPH_VERSION=reef
2025-09-10 00:39:23.564503 | orchestrator | ++ CEPH_VERSION=reef
2025-09-10 00:39:23.564513 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-09-10 00:39:23.564529 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-09-10 00:39:23.564539 | orchestrator | ++ export MANAGER_VERSION=latest
2025-09-10 00:39:23.564552 | orchestrator | ++ MANAGER_VERSION=latest
2025-09-10 00:39:23.564562 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-09-10 00:39:23.564572 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-09-10 00:39:23.564630 | orchestrator | ++ export ARA=false
2025-09-10 00:39:23.564643 | orchestrator | ++ ARA=false
2025-09-10 00:39:23.564652 | orchestrator | ++ export DEPLOY_MODE=manager
2025-09-10 00:39:23.564662 | orchestrator | ++ DEPLOY_MODE=manager
2025-09-10 00:39:23.564671 | orchestrator | ++ export TEMPEST=true
2025-09-10 00:39:23.564681 | orchestrator | ++ TEMPEST=true
2025-09-10 00:39:23.564690 | orchestrator | ++ export IS_ZUUL=true
2025-09-10 00:39:23.564699 | orchestrator | ++ IS_ZUUL=true
2025-09-10 00:39:23.564709 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.31
2025-09-10 00:39:23.564719 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.31
2025-09-10 00:39:23.564728 | orchestrator | ++ export EXTERNAL_API=false
2025-09-10 00:39:23.564737 | orchestrator | ++ EXTERNAL_API=false
2025-09-10 00:39:23.564747 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-09-10 00:39:23.564757 | orchestrator | ++ IMAGE_USER=ubuntu
2025-09-10 00:39:23.564767 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-09-10 00:39:23.564776 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-09-10 00:39:23.564786 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-09-10 00:39:23.564795 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-09-10 00:39:23.564805 | orchestrator | + echo
2025-09-10 00:39:23.564814 | orchestrator | + echo '# PULL IMAGES'
2025-09-10 00:39:23.564824 | orchestrator | + echo
2025-09-10 00:39:23.564840 | orchestrator | ++ semver latest 7.0.0
2025-09-10 00:39:23.632797 | orchestrator | + [[ -1 -ge 0 ]]
2025-09-10 00:39:23.632888 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-09-10 00:39:23.632902 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2025-09-10 00:39:25.587362 | orchestrator | 2025-09-10 00:39:25 | INFO  | Trying to run play pull-images in environment custom
2025-09-10 00:39:35.671344 | orchestrator | 2025-09-10 00:39:35 | INFO  | Task 3bf19e0b-f756-443c-a6df-23b3811118ec (pull-images) was prepared for execution.
2025-09-10 00:39:35.671453 | orchestrator | 2025-09-10 00:39:35 | INFO  | Task 3bf19e0b-f756-443c-a6df-23b3811118ec is running in background. No more output. Check ARA for logs.
2025-09-10 00:39:38.002734 | orchestrator | 2025-09-10 00:39:38 | INFO  | Trying to run play wipe-partitions in environment custom
2025-09-10 00:39:48.103778 | orchestrator | 2025-09-10 00:39:48 | INFO  | Task 0cdc3035-23e2-4362-bcc6-549383322a24 (wipe-partitions) was prepared for execution.
2025-09-10 00:39:48.103954 | orchestrator | 2025-09-10 00:39:48 | INFO  | It takes a moment until task 0cdc3035-23e2-4362-bcc6-549383322a24 (wipe-partitions) has been started and output is visible here.
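The `semver latest 7.0.0` / `[[ -1 -ge 0 ]]` / `[[ latest == latest ]]` sequence above is a version gate run before the custom `pull-images` play. A hypothetical reconstruction of the gate logic (the `semver` helper is not shown in this log; it is assumed here to print -1/0/1 when comparing two versions, which matches the `-1` the trace compared, and `latest` is accepted unconditionally):

```shell
#!/usr/bin/env bash
# Hypothetical reconstruction of the version gate traced above. Assumes a
# `semver A B` helper that prints -1/0/1 for A<B, A==B, A>B respectively.
pull_images_supported() {
    local version=$1
    # New enough manager version (>= 7.0.0) supports the pull-images play ...
    if [[ $(semver "$version" 7.0.0) -ge 0 ]]; then
        return 0
    fi
    # ... and so does the rolling "latest" tag, which semver cannot order.
    [[ $version == latest ]]
}

# In the job, passing this gate triggers:
#   osism apply --no-wait -r 2 -e custom pull-images
```

Because `MANAGER_VERSION=latest` in this run, the numeric comparison fails (`-1 -ge 0`) but the string check passes, so the play is started anyway.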
2025-09-10 00:40:00.842115 | orchestrator |
2025-09-10 00:40:00.842227 | orchestrator | PLAY [Wipe partitions] *********************************************************
2025-09-10 00:40:00.842244 | orchestrator |
2025-09-10 00:40:00.842257 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2025-09-10 00:40:00.842276 | orchestrator | Wednesday 10 September 2025 00:39:52 +0000 (0:00:00.212) 0:00:00.212 ***
2025-09-10 00:40:00.842288 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:40:00.842300 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:40:00.842312 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:40:00.842324 | orchestrator |
2025-09-10 00:40:00.842335 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2025-09-10 00:40:00.842372 | orchestrator | Wednesday 10 September 2025 00:39:53 +0000 (0:00:00.599) 0:00:00.811 ***
2025-09-10 00:40:00.842384 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:40:00.842395 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:40:00.842411 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:40:00.842422 | orchestrator |
2025-09-10 00:40:00.842433 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2025-09-10 00:40:00.842444 | orchestrator | Wednesday 10 September 2025 00:39:53 +0000 (0:00:00.288) 0:00:01.099 ***
2025-09-10 00:40:00.842455 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:40:00.842467 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:40:00.842477 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:40:00.842488 | orchestrator |
2025-09-10 00:40:00.842499 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2025-09-10 00:40:00.842510 | orchestrator | Wednesday 10 September 2025 00:39:54 +0000 (0:00:00.682) 0:00:01.782 ***
2025-09-10 00:40:00.842521 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:40:00.842532 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:40:00.842542 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:40:00.842553 | orchestrator |
2025-09-10 00:40:00.842564 | orchestrator | TASK [Check device availability] ***********************************************
2025-09-10 00:40:00.842575 | orchestrator | Wednesday 10 September 2025 00:39:54 +0000 (0:00:00.250) 0:00:02.033 ***
2025-09-10 00:40:00.842649 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-09-10 00:40:00.842669 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-09-10 00:40:00.842683 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-09-10 00:40:00.842695 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-09-10 00:40:00.842709 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-09-10 00:40:00.842721 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-09-10 00:40:00.842734 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-09-10 00:40:00.842746 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-09-10 00:40:00.842758 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-09-10 00:40:00.842771 | orchestrator |
2025-09-10 00:40:00.842783 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2025-09-10 00:40:00.842797 | orchestrator | Wednesday 10 September 2025 00:39:55 +0000 (0:00:01.193) 0:00:03.227 ***
2025-09-10 00:40:00.842810 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2025-09-10 00:40:00.842823 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2025-09-10 00:40:00.842835 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2025-09-10 00:40:00.842847 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2025-09-10 00:40:00.842859 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2025-09-10 00:40:00.842872 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2025-09-10 00:40:00.842884 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2025-09-10 00:40:00.842897 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2025-09-10 00:40:00.842909 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2025-09-10 00:40:00.842922 | orchestrator |
2025-09-10 00:40:00.842934 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2025-09-10 00:40:00.842948 | orchestrator | Wednesday 10 September 2025 00:39:56 +0000 (0:00:01.333) 0:00:04.560 ***
2025-09-10 00:40:00.842960 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-09-10 00:40:00.842970 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-09-10 00:40:00.842981 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-09-10 00:40:00.842992 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-09-10 00:40:00.843002 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-09-10 00:40:00.843013 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-09-10 00:40:00.843024 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-09-10 00:40:00.843043 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-09-10 00:40:00.843061 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-09-10 00:40:00.843072 | orchestrator |
2025-09-10 00:40:00.843083 | orchestrator | TASK [Reload udev rules] *******************************************************
2025-09-10 00:40:00.843094 | orchestrator | Wednesday 10 September 2025 00:39:59 +0000 (0:00:00.582) 0:00:06.784 ***
2025-09-10 00:40:00.843105 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:40:00.843116 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:40:00.843127 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:40:00.843137 | orchestrator |
2025-09-10 00:40:00.843148 | orchestrator | TASK [Request device events from the kernel] ***********************************
2025-09-10 00:40:00.843159 | orchestrator | Wednesday 10 September 2025 00:39:59 +0000 (0:00:00.582) 0:00:07.367 ***
2025-09-10 00:40:00.843170 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:40:00.843180 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:40:00.843191 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:40:00.843202 | orchestrator |
2025-09-10 00:40:00.843212 | orchestrator | PLAY RECAP *********************************************************************
2025-09-10 00:40:00.843227 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-10 00:40:00.843240 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-10 00:40:00.843269 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-10 00:40:00.843281 | orchestrator |
2025-09-10 00:40:00.843292 | orchestrator |
2025-09-10 00:40:00.843302 | orchestrator | TASKS RECAP ********************************************************************
2025-09-10 00:40:00.843313 | orchestrator | Wednesday 10 September 2025 00:40:00 +0000 (0:00:00.641) 0:00:08.009 ***
2025-09-10 00:40:00.843324 | orchestrator | ===============================================================================
2025-09-10 00:40:00.843334 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.22s
2025-09-10 00:40:00.843345 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.33s
2025-09-10 00:40:00.843356 | orchestrator | Check device availability ----------------------------------------------- 1.19s
2025-09-10 00:40:00.843367 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.68s
2025-09-10 00:40:00.843377 | orchestrator | Request device events from the kernel ----------------------------------- 0.64s
2025-09-10 00:40:00.843388 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.60s
2025-09-10 00:40:00.843399 | orchestrator | Reload udev rules ------------------------------------------------------- 0.58s
2025-09-10 00:40:00.843410 | orchestrator | Remove all rook related logical devices --------------------------------- 0.29s
2025-09-10 00:40:00.843420 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.25s
2025-09-10 00:40:13.091442 | orchestrator | 2025-09-10 00:40:13 | INFO  | Task 445771fe-6a5a-4b2c-9fd4-9a664a868e27 (facts) was prepared for execution.
2025-09-10 00:40:13.091530 | orchestrator | 2025-09-10 00:40:13 | INFO  | It takes a moment until task 445771fe-6a5a-4b2c-9fd4-9a664a868e27 (facts) has been started and output is visible here.
2025-09-10 00:40:25.298722 | orchestrator |
2025-09-10 00:40:25.298820 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-09-10 00:40:25.298836 | orchestrator |
2025-09-10 00:40:25.298849 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-09-10 00:40:25.298860 | orchestrator | Wednesday 10 September 2025 00:40:17 +0000 (0:00:00.279) 0:00:00.279 ***
2025-09-10 00:40:25.298871 | orchestrator | ok: [testbed-manager]
2025-09-10 00:40:25.298882 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:40:25.298893 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:40:25.298923 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:40:25.298934 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:40:25.298944 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:40:25.298955 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:40:25.298965 | orchestrator |
2025-09-10 00:40:25.298976 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-09-10 00:40:25.298986 | orchestrator | Wednesday 10 September 2025 00:40:18 +0000 (0:00:01.076) 0:00:01.356 ***
2025-09-10 00:40:25.298997 | orchestrator | skipping: [testbed-manager]
2025-09-10 00:40:25.299008 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:40:25.299018 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:40:25.299029 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:40:25.299039 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:40:25.299049 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:40:25.299060 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:40:25.299070 | orchestrator |
2025-09-10 00:40:25.299081 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-10 00:40:25.299091 | orchestrator |
2025-09-10 00:40:25.299116 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-10 00:40:25.299127 | orchestrator | Wednesday 10 September 2025 00:40:19 +0000 (0:00:01.230) 0:00:02.587 ***
2025-09-10 00:40:25.299138 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:40:25.299148 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:40:25.299160 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:40:25.299170 | orchestrator | ok: [testbed-manager]
2025-09-10 00:40:25.299181 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:40:25.299191 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:40:25.299202 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:40:25.299212 | orchestrator |
2025-09-10 00:40:25.299223 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-09-10 00:40:25.299236 | orchestrator |
2025-09-10 00:40:25.299249 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-09-10 00:40:25.299262 | orchestrator | Wednesday 10 September 2025 00:40:24 +0000 (0:00:04.607) 0:00:07.194 ***
2025-09-10 00:40:25.299274 | orchestrator | skipping: [testbed-manager]
2025-09-10 00:40:25.299287 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:40:25.299299 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:40:25.299311 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:40:25.299323 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:40:25.299335 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:40:25.299348 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:40:25.299360 | orchestrator |
2025-09-10 00:40:25.299372 | orchestrator | PLAY RECAP *********************************************************************
2025-09-10 00:40:25.299385 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-10 00:40:25.299398 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-10 00:40:25.299410 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-10 00:40:25.299423 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-10 00:40:25.299435 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-10 00:40:25.299447 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-10 00:40:25.299460 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-10 00:40:25.299473 | orchestrator |
2025-09-10 00:40:25.299494 | orchestrator |
2025-09-10 00:40:25.299506 | orchestrator | TASKS RECAP ********************************************************************
2025-09-10 00:40:25.299519 | orchestrator | Wednesday 10 September 2025 00:40:24 +0000 (0:00:00.716) 0:00:07.911 ***
2025-09-10 00:40:25.299531 | orchestrator | ===============================================================================
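Per disk, the wipe-partitions play above reduces to roughly the following shell steps, shown here as a hedged sketch (destructive on real block devices; `/dev/sdb`–`/dev/sdd` are the testbed's Ceph disks, and the udev reload/trigger steps are separate tasks in the play):

```shell
#!/usr/bin/env bash
# Rough shell equivalent of the wipe tasks above. DESTRUCTIVE on real devices;
# only ever point it at throwaway disks or a scratch file.
wipe_device() {
    local dev=$1
    # "Wipe partitions with wipefs": drop filesystem/RAID/partition-table
    # signatures (skipped gracefully if wipefs is unavailable).
    command -v wipefs >/dev/null && wipefs --all "$dev"
    # "Overwrite first 32M with zeros": clear leftover metadata (e.g. LVM/Ceph
    # labels) that signature-based wiping can miss.
    dd if=/dev/zero of="$dev" bs=1M count=32 conv=notrunc,fsync status=none
}

# The play then reloads udev rules and re-requests device events, e.g.:
#   udevadm control --reload-rules && udevadm trigger
```

Only nodes 3–5 appear in the recap because those are the storage nodes whose disks are handed to ceph-ansible afterwards.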
2025-09-10 00:40:25.299544 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.61s
2025-09-10 00:40:25.299556 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.23s
2025-09-10 00:40:25.299568 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.08s
2025-09-10 00:40:25.299581 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.72s
2025-09-10 00:40:27.582171 | orchestrator | 2025-09-10 00:40:27 | INFO  | Task 7ca8edd8-2e08-4d5c-9d7b-63f3f054f452 (ceph-configure-lvm-volumes) was prepared for execution.
2025-09-10 00:40:27.582256 | orchestrator | 2025-09-10 00:40:27 | INFO  | It takes a moment until task 7ca8edd8-2e08-4d5c-9d7b-63f3f054f452 (ceph-configure-lvm-volumes) has been started and output is visible here.
2025-09-10 00:40:39.608747 | orchestrator |
2025-09-10 00:40:39.608863 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-09-10 00:40:39.608879 | orchestrator |
2025-09-10 00:40:39.608891 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-10 00:40:39.608903 | orchestrator | Wednesday 10 September 2025 00:40:31 +0000 (0:00:00.319) 0:00:00.319 ***
2025-09-10 00:40:39.608914 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-10 00:40:39.608925 | orchestrator |
2025-09-10 00:40:39.608936 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-10 00:40:39.608947 | orchestrator | Wednesday 10 September 2025 00:40:32 +0000 (0:00:00.245) 0:00:00.564 ***
2025-09-10 00:40:39.608958 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:40:39.608970 | orchestrator |
2025-09-10 00:40:39.608981 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-10 00:40:39.608991 | orchestrator | Wednesday 10 September 2025 00:40:32 +0000 (0:00:00.221) 0:00:00.786 ***
2025-09-10 00:40:39.609002 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-09-10 00:40:39.609014 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-09-10 00:40:39.609025 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-09-10 00:40:39.609046 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-09-10 00:40:39.609058 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-09-10 00:40:39.609068 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-09-10 00:40:39.609079 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-09-10 00:40:39.609090 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-09-10 00:40:39.609101 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-09-10 00:40:39.609111 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-09-10 00:40:39.609122 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-09-10 00:40:39.609132 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-09-10 00:40:39.609143 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-09-10 00:40:39.609153 | orchestrator |
2025-09-10 00:40:39.609164 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-10 00:40:39.609174 | orchestrator | Wednesday 10 September 2025 00:40:32 +0000 (0:00:00.350) 0:00:01.136 ***
2025-09-10 00:40:39.609185 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:40:39.609218 | orchestrator |
2025-09-10 00:40:39.609232 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-10 00:40:39.609245 | orchestrator | Wednesday 10 September 2025 00:40:33 +0000 (0:00:00.506) 0:00:01.643 ***
2025-09-10 00:40:39.609257 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:40:39.609270 | orchestrator |
2025-09-10 00:40:39.609282 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-10 00:40:39.609295 | orchestrator | Wednesday 10 September 2025 00:40:33 +0000 (0:00:00.184) 0:00:01.827 ***
2025-09-10 00:40:39.609307 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:40:39.609320 | orchestrator |
2025-09-10 00:40:39.609332 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-10 00:40:39.609345 | orchestrator | Wednesday 10 September 2025 00:40:33 +0000 (0:00:00.197) 0:00:02.025 ***
2025-09-10 00:40:39.609358 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:40:39.609376 | orchestrator |
2025-09-10 00:40:39.609388 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-10 00:40:39.609401 | orchestrator | Wednesday 10 September 2025 00:40:33 +0000 (0:00:00.192) 0:00:02.218 ***
2025-09-10 00:40:39.609413 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:40:39.609426 | orchestrator |
2025-09-10 00:40:39.609439 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-10 00:40:39.609451 | orchestrator | Wednesday 10 September 2025 00:40:33 +0000 (0:00:00.198) 0:00:02.417 ***
2025-09-10 00:40:39.609464 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:40:39.609476 | orchestrator |
2025-09-10 00:40:39.609489 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-10 00:40:39.609502 | orchestrator | Wednesday 10 September 2025 00:40:34 +0000 (0:00:00.201) 0:00:02.619 ***
2025-09-10 00:40:39.609515 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:40:39.609527 | orchestrator |
2025-09-10 00:40:39.609539 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-10 00:40:39.609552 | orchestrator | Wednesday 10 September 2025 00:40:34 +0000 (0:00:00.198) 0:00:02.818 ***
2025-09-10 00:40:39.609564 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:40:39.609577 | orchestrator |
2025-09-10 00:40:39.609614 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-10 00:40:39.609626 | orchestrator | Wednesday 10 September 2025 00:40:34 +0000 (0:00:00.214) 0:00:03.032 ***
2025-09-10 00:40:39.609636 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_885f5351-8a1d-42ab-b6e2-24d16c7f1b28)
2025-09-10 00:40:39.609648 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_885f5351-8a1d-42ab-b6e2-24d16c7f1b28)
2025-09-10 00:40:39.609659 | orchestrator |
2025-09-10 00:40:39.609669 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-10 00:40:39.609680 | orchestrator | Wednesday 10 September 2025 00:40:34 +0000 (0:00:00.409) 0:00:03.442 ***
2025-09-10 00:40:39.609709 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_15be4489-a4ef-46b7-8669-fa4e45790ef6)
2025-09-10 00:40:39.609721 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_15be4489-a4ef-46b7-8669-fa4e45790ef6)
2025-09-10 00:40:39.609731 | orchestrator |
2025-09-10 00:40:39.609742 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-10 00:40:39.609753 | orchestrator | Wednesday 10 September 2025 00:40:35 +0000 (0:00:00.413) 0:00:03.855 ***
2025-09-10 00:40:39.609769 | orchestrator |
orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2400494c-3cf2-4780-9c6b-527d492c6bfd) 2025-09-10 00:40:39.609780 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2400494c-3cf2-4780-9c6b-527d492c6bfd) 2025-09-10 00:40:39.609791 | orchestrator | 2025-09-10 00:40:39.609801 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 00:40:39.609812 | orchestrator | Wednesday 10 September 2025 00:40:36 +0000 (0:00:00.664) 0:00:04.519 *** 2025-09-10 00:40:39.609823 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b761a3bc-d220-47d2-9376-37cff0079757) 2025-09-10 00:40:39.609842 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b761a3bc-d220-47d2-9376-37cff0079757) 2025-09-10 00:40:39.609853 | orchestrator | 2025-09-10 00:40:39.609863 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 00:40:39.609874 | orchestrator | Wednesday 10 September 2025 00:40:36 +0000 (0:00:00.708) 0:00:05.228 *** 2025-09-10 00:40:39.609884 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-10 00:40:39.609894 | orchestrator | 2025-09-10 00:40:39.609905 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:40:39.609916 | orchestrator | Wednesday 10 September 2025 00:40:37 +0000 (0:00:00.818) 0:00:06.046 *** 2025-09-10 00:40:39.609926 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-09-10 00:40:39.609937 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-09-10 00:40:39.609947 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-09-10 00:40:39.609958 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => 
(item=loop3) 2025-09-10 00:40:39.609968 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-09-10 00:40:39.609979 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-09-10 00:40:39.609989 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-09-10 00:40:39.609999 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-09-10 00:40:39.610010 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-09-10 00:40:39.610080 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-09-10 00:40:39.610091 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-09-10 00:40:39.610102 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-09-10 00:40:39.610113 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-09-10 00:40:39.610123 | orchestrator | 2025-09-10 00:40:39.610134 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:40:39.610145 | orchestrator | Wednesday 10 September 2025 00:40:37 +0000 (0:00:00.371) 0:00:06.417 *** 2025-09-10 00:40:39.610156 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:40:39.610166 | orchestrator | 2025-09-10 00:40:39.610177 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:40:39.610188 | orchestrator | Wednesday 10 September 2025 00:40:38 +0000 (0:00:00.204) 0:00:06.622 *** 2025-09-10 00:40:39.610199 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:40:39.610209 | orchestrator | 2025-09-10 00:40:39.610220 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-09-10 00:40:39.610230 | orchestrator | Wednesday 10 September 2025 00:40:38 +0000 (0:00:00.195) 0:00:06.817 *** 2025-09-10 00:40:39.610241 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:40:39.610251 | orchestrator | 2025-09-10 00:40:39.610262 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:40:39.610273 | orchestrator | Wednesday 10 September 2025 00:40:38 +0000 (0:00:00.234) 0:00:07.052 *** 2025-09-10 00:40:39.610283 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:40:39.610294 | orchestrator | 2025-09-10 00:40:39.610304 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:40:39.610315 | orchestrator | Wednesday 10 September 2025 00:40:38 +0000 (0:00:00.190) 0:00:07.243 *** 2025-09-10 00:40:39.610325 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:40:39.610336 | orchestrator | 2025-09-10 00:40:39.610346 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:40:39.610366 | orchestrator | Wednesday 10 September 2025 00:40:38 +0000 (0:00:00.208) 0:00:07.452 *** 2025-09-10 00:40:39.610376 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:40:39.610387 | orchestrator | 2025-09-10 00:40:39.610397 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:40:39.610408 | orchestrator | Wednesday 10 September 2025 00:40:39 +0000 (0:00:00.224) 0:00:07.676 *** 2025-09-10 00:40:39.610418 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:40:39.610429 | orchestrator | 2025-09-10 00:40:39.610439 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:40:39.610450 | orchestrator | Wednesday 10 September 2025 00:40:39 +0000 (0:00:00.197) 0:00:07.873 *** 2025-09-10 00:40:39.610467 | 
orchestrator | skipping: [testbed-node-3] 2025-09-10 00:40:47.530856 | orchestrator | 2025-09-10 00:40:47.530964 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:40:47.530983 | orchestrator | Wednesday 10 September 2025 00:40:39 +0000 (0:00:00.215) 0:00:08.088 *** 2025-09-10 00:40:47.530995 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-09-10 00:40:47.531008 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-09-10 00:40:47.531019 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-09-10 00:40:47.531030 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-09-10 00:40:47.531041 | orchestrator | 2025-09-10 00:40:47.531052 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:40:47.531063 | orchestrator | Wednesday 10 September 2025 00:40:40 +0000 (0:00:01.075) 0:00:09.164 *** 2025-09-10 00:40:47.531091 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:40:47.531103 | orchestrator | 2025-09-10 00:40:47.531114 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:40:47.531125 | orchestrator | Wednesday 10 September 2025 00:40:40 +0000 (0:00:00.229) 0:00:09.393 *** 2025-09-10 00:40:47.531135 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:40:47.531146 | orchestrator | 2025-09-10 00:40:47.531157 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:40:47.531168 | orchestrator | Wednesday 10 September 2025 00:40:41 +0000 (0:00:00.194) 0:00:09.588 *** 2025-09-10 00:40:47.531179 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:40:47.531189 | orchestrator | 2025-09-10 00:40:47.531200 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:40:47.531211 | orchestrator | Wednesday 10 September 2025 00:40:41 +0000 (0:00:00.210) 
0:00:09.799 *** 2025-09-10 00:40:47.531222 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:40:47.531232 | orchestrator | 2025-09-10 00:40:47.531243 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-09-10 00:40:47.531254 | orchestrator | Wednesday 10 September 2025 00:40:41 +0000 (0:00:00.203) 0:00:10.002 *** 2025-09-10 00:40:47.531265 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-09-10 00:40:47.531276 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-09-10 00:40:47.531287 | orchestrator | 2025-09-10 00:40:47.531297 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-10 00:40:47.531308 | orchestrator | Wednesday 10 September 2025 00:40:41 +0000 (0:00:00.164) 0:00:10.167 *** 2025-09-10 00:40:47.531319 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:40:47.531329 | orchestrator | 2025-09-10 00:40:47.531340 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-09-10 00:40:47.531351 | orchestrator | Wednesday 10 September 2025 00:40:41 +0000 (0:00:00.134) 0:00:10.301 *** 2025-09-10 00:40:47.531362 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:40:47.531372 | orchestrator | 2025-09-10 00:40:47.531383 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-10 00:40:47.531396 | orchestrator | Wednesday 10 September 2025 00:40:41 +0000 (0:00:00.131) 0:00:10.433 *** 2025-09-10 00:40:47.531408 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:40:47.531443 | orchestrator | 2025-09-10 00:40:47.531457 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-10 00:40:47.531469 | orchestrator | Wednesday 10 September 2025 00:40:42 +0000 (0:00:00.140) 0:00:10.574 *** 2025-09-10 00:40:47.531482 | orchestrator | ok: 
[testbed-node-3] 2025-09-10 00:40:47.531494 | orchestrator | 2025-09-10 00:40:47.531507 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-10 00:40:47.531519 | orchestrator | Wednesday 10 September 2025 00:40:42 +0000 (0:00:00.129) 0:00:10.704 *** 2025-09-10 00:40:47.531532 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4b73e898-cb4c-523f-8aca-971ee560c7ea'}}) 2025-09-10 00:40:47.531545 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2bea83b6-6800-529c-bdd8-a613f3421a6f'}}) 2025-09-10 00:40:47.531557 | orchestrator | 2025-09-10 00:40:47.531568 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-10 00:40:47.531579 | orchestrator | Wednesday 10 September 2025 00:40:42 +0000 (0:00:00.166) 0:00:10.871 *** 2025-09-10 00:40:47.531613 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4b73e898-cb4c-523f-8aca-971ee560c7ea'}})  2025-09-10 00:40:47.531634 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2bea83b6-6800-529c-bdd8-a613f3421a6f'}})  2025-09-10 00:40:47.531645 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:40:47.531656 | orchestrator | 2025-09-10 00:40:47.531667 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-10 00:40:47.531678 | orchestrator | Wednesday 10 September 2025 00:40:42 +0000 (0:00:00.161) 0:00:11.032 *** 2025-09-10 00:40:47.531688 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4b73e898-cb4c-523f-8aca-971ee560c7ea'}})  2025-09-10 00:40:47.531699 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2bea83b6-6800-529c-bdd8-a613f3421a6f'}})  2025-09-10 00:40:47.531710 | orchestrator | skipping: [testbed-node-3] 2025-09-10 
00:40:47.531720 | orchestrator | 2025-09-10 00:40:47.531731 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-10 00:40:47.531741 | orchestrator | Wednesday 10 September 2025 00:40:42 +0000 (0:00:00.385) 0:00:11.418 *** 2025-09-10 00:40:47.531752 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4b73e898-cb4c-523f-8aca-971ee560c7ea'}})  2025-09-10 00:40:47.531763 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2bea83b6-6800-529c-bdd8-a613f3421a6f'}})  2025-09-10 00:40:47.531774 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:40:47.531784 | orchestrator | 2025-09-10 00:40:47.531811 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-10 00:40:47.531823 | orchestrator | Wednesday 10 September 2025 00:40:43 +0000 (0:00:00.166) 0:00:11.584 *** 2025-09-10 00:40:47.531833 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:40:47.531844 | orchestrator | 2025-09-10 00:40:47.531854 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-10 00:40:47.531865 | orchestrator | Wednesday 10 September 2025 00:40:43 +0000 (0:00:00.167) 0:00:11.751 *** 2025-09-10 00:40:47.531876 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:40:47.531886 | orchestrator | 2025-09-10 00:40:47.531897 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-10 00:40:47.531907 | orchestrator | Wednesday 10 September 2025 00:40:43 +0000 (0:00:00.150) 0:00:11.902 *** 2025-09-10 00:40:47.531918 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:40:47.531929 | orchestrator | 2025-09-10 00:40:47.531939 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-10 00:40:47.531950 | orchestrator | Wednesday 10 September 2025 00:40:43 +0000 
(0:00:00.139) 0:00:12.042 *** 2025-09-10 00:40:47.531960 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:40:47.531971 | orchestrator | 2025-09-10 00:40:47.531991 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-09-10 00:40:47.532002 | orchestrator | Wednesday 10 September 2025 00:40:43 +0000 (0:00:00.150) 0:00:12.192 *** 2025-09-10 00:40:47.532012 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:40:47.532023 | orchestrator | 2025-09-10 00:40:47.532034 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-09-10 00:40:47.532045 | orchestrator | Wednesday 10 September 2025 00:40:43 +0000 (0:00:00.144) 0:00:12.337 *** 2025-09-10 00:40:47.532055 | orchestrator | ok: [testbed-node-3] => { 2025-09-10 00:40:47.532066 | orchestrator |  "ceph_osd_devices": { 2025-09-10 00:40:47.532077 | orchestrator |  "sdb": { 2025-09-10 00:40:47.532088 | orchestrator |  "osd_lvm_uuid": "4b73e898-cb4c-523f-8aca-971ee560c7ea" 2025-09-10 00:40:47.532098 | orchestrator |  }, 2025-09-10 00:40:47.532109 | orchestrator |  "sdc": { 2025-09-10 00:40:47.532120 | orchestrator |  "osd_lvm_uuid": "2bea83b6-6800-529c-bdd8-a613f3421a6f" 2025-09-10 00:40:47.532130 | orchestrator |  } 2025-09-10 00:40:47.532141 | orchestrator |  } 2025-09-10 00:40:47.532151 | orchestrator | } 2025-09-10 00:40:47.532162 | orchestrator | 2025-09-10 00:40:47.532172 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-09-10 00:40:47.532183 | orchestrator | Wednesday 10 September 2025 00:40:44 +0000 (0:00:00.159) 0:00:12.496 *** 2025-09-10 00:40:47.532194 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:40:47.532204 | orchestrator | 2025-09-10 00:40:47.532215 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-09-10 00:40:47.532226 | orchestrator | Wednesday 10 September 2025 00:40:44 +0000 
(0:00:00.131) 0:00:12.628 *** 2025-09-10 00:40:47.532242 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:40:47.532253 | orchestrator | 2025-09-10 00:40:47.532264 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-09-10 00:40:47.532275 | orchestrator | Wednesday 10 September 2025 00:40:44 +0000 (0:00:00.139) 0:00:12.767 *** 2025-09-10 00:40:47.532285 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:40:47.532296 | orchestrator | 2025-09-10 00:40:47.532306 | orchestrator | TASK [Print configuration data] ************************************************ 2025-09-10 00:40:47.532317 | orchestrator | Wednesday 10 September 2025 00:40:44 +0000 (0:00:00.142) 0:00:12.910 *** 2025-09-10 00:40:47.532327 | orchestrator | changed: [testbed-node-3] => { 2025-09-10 00:40:47.532338 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-09-10 00:40:47.532349 | orchestrator |  "ceph_osd_devices": { 2025-09-10 00:40:47.532360 | orchestrator |  "sdb": { 2025-09-10 00:40:47.532370 | orchestrator |  "osd_lvm_uuid": "4b73e898-cb4c-523f-8aca-971ee560c7ea" 2025-09-10 00:40:47.532381 | orchestrator |  }, 2025-09-10 00:40:47.532392 | orchestrator |  "sdc": { 2025-09-10 00:40:47.532402 | orchestrator |  "osd_lvm_uuid": "2bea83b6-6800-529c-bdd8-a613f3421a6f" 2025-09-10 00:40:47.532413 | orchestrator |  } 2025-09-10 00:40:47.532423 | orchestrator |  }, 2025-09-10 00:40:47.532434 | orchestrator |  "lvm_volumes": [ 2025-09-10 00:40:47.532445 | orchestrator |  { 2025-09-10 00:40:47.532455 | orchestrator |  "data": "osd-block-4b73e898-cb4c-523f-8aca-971ee560c7ea", 2025-09-10 00:40:47.532466 | orchestrator |  "data_vg": "ceph-4b73e898-cb4c-523f-8aca-971ee560c7ea" 2025-09-10 00:40:47.532477 | orchestrator |  }, 2025-09-10 00:40:47.532487 | orchestrator |  { 2025-09-10 00:40:47.532498 | orchestrator |  "data": "osd-block-2bea83b6-6800-529c-bdd8-a613f3421a6f", 2025-09-10 00:40:47.532508 | orchestrator |  "data_vg": 
"ceph-2bea83b6-6800-529c-bdd8-a613f3421a6f" 2025-09-10 00:40:47.532519 | orchestrator |  } 2025-09-10 00:40:47.532529 | orchestrator |  ] 2025-09-10 00:40:47.532540 | orchestrator |  } 2025-09-10 00:40:47.532550 | orchestrator | } 2025-09-10 00:40:47.532561 | orchestrator | 2025-09-10 00:40:47.532571 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-09-10 00:40:47.532625 | orchestrator | Wednesday 10 September 2025 00:40:44 +0000 (0:00:00.256) 0:00:13.167 *** 2025-09-10 00:40:47.532637 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-10 00:40:47.532648 | orchestrator | 2025-09-10 00:40:47.532659 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-09-10 00:40:47.532669 | orchestrator | 2025-09-10 00:40:47.532680 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-10 00:40:47.532691 | orchestrator | Wednesday 10 September 2025 00:40:47 +0000 (0:00:02.367) 0:00:15.534 *** 2025-09-10 00:40:47.532701 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-10 00:40:47.532712 | orchestrator | 2025-09-10 00:40:47.532722 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-10 00:40:47.532733 | orchestrator | Wednesday 10 September 2025 00:40:47 +0000 (0:00:00.232) 0:00:15.767 *** 2025-09-10 00:40:47.532743 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:40:47.532754 | orchestrator | 2025-09-10 00:40:47.532765 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 00:40:47.532783 | orchestrator | Wednesday 10 September 2025 00:40:47 +0000 (0:00:00.247) 0:00:16.014 *** 2025-09-10 00:40:55.607572 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-09-10 00:40:55.607720 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-09-10 00:40:55.607737 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-09-10 00:40:55.607749 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-09-10 00:40:55.607760 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-09-10 00:40:55.607771 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-09-10 00:40:55.607781 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-09-10 00:40:55.607792 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-09-10 00:40:55.607803 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-09-10 00:40:55.607814 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-09-10 00:40:55.607842 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-09-10 00:40:55.607854 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-09-10 00:40:55.607865 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-09-10 00:40:55.607881 | orchestrator | 2025-09-10 00:40:55.607893 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 00:40:55.607905 | orchestrator | Wednesday 10 September 2025 00:40:47 +0000 (0:00:00.386) 0:00:16.401 *** 2025-09-10 00:40:55.607916 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:40:55.607928 | orchestrator | 2025-09-10 00:40:55.607939 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 
00:40:55.607950 | orchestrator | Wednesday 10 September 2025 00:40:48 +0000 (0:00:00.209) 0:00:16.611 *** 2025-09-10 00:40:55.607961 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:40:55.607971 | orchestrator | 2025-09-10 00:40:55.607983 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 00:40:55.607993 | orchestrator | Wednesday 10 September 2025 00:40:48 +0000 (0:00:00.196) 0:00:16.807 *** 2025-09-10 00:40:55.608004 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:40:55.608015 | orchestrator | 2025-09-10 00:40:55.608026 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 00:40:55.608037 | orchestrator | Wednesday 10 September 2025 00:40:48 +0000 (0:00:00.230) 0:00:17.038 *** 2025-09-10 00:40:55.608047 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:40:55.608082 | orchestrator | 2025-09-10 00:40:55.608094 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 00:40:55.608104 | orchestrator | Wednesday 10 September 2025 00:40:48 +0000 (0:00:00.189) 0:00:17.228 *** 2025-09-10 00:40:55.608115 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:40:55.608128 | orchestrator | 2025-09-10 00:40:55.608141 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 00:40:55.608153 | orchestrator | Wednesday 10 September 2025 00:40:49 +0000 (0:00:00.642) 0:00:17.871 *** 2025-09-10 00:40:55.608166 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:40:55.608178 | orchestrator | 2025-09-10 00:40:55.608191 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 00:40:55.608203 | orchestrator | Wednesday 10 September 2025 00:40:49 +0000 (0:00:00.203) 0:00:18.074 *** 2025-09-10 00:40:55.608216 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:40:55.608228 | 
orchestrator | 2025-09-10 00:40:55.608241 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 00:40:55.608253 | orchestrator | Wednesday 10 September 2025 00:40:49 +0000 (0:00:00.219) 0:00:18.294 *** 2025-09-10 00:40:55.608265 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:40:55.608277 | orchestrator | 2025-09-10 00:40:55.608289 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 00:40:55.608302 | orchestrator | Wednesday 10 September 2025 00:40:50 +0000 (0:00:00.205) 0:00:18.500 *** 2025-09-10 00:40:55.608314 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_1d1bd04c-2fb3-49ff-963e-06aac3d99067) 2025-09-10 00:40:55.608327 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_1d1bd04c-2fb3-49ff-963e-06aac3d99067) 2025-09-10 00:40:55.608340 | orchestrator | 2025-09-10 00:40:55.608353 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 00:40:55.608366 | orchestrator | Wednesday 10 September 2025 00:40:50 +0000 (0:00:00.436) 0:00:18.936 *** 2025-09-10 00:40:55.608379 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4b58dc27-9074-4d6f-a7a6-fd15259a7e00) 2025-09-10 00:40:55.608392 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_4b58dc27-9074-4d6f-a7a6-fd15259a7e00) 2025-09-10 00:40:55.608404 | orchestrator | 2025-09-10 00:40:55.608416 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 00:40:55.608429 | orchestrator | Wednesday 10 September 2025 00:40:50 +0000 (0:00:00.463) 0:00:19.399 *** 2025-09-10 00:40:55.608442 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4a1ca5dc-6f40-4107-905c-b866d721086e) 2025-09-10 00:40:55.608455 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_4a1ca5dc-6f40-4107-905c-b866d721086e) 2025-09-10 00:40:55.608465 | orchestrator | 2025-09-10 00:40:55.608476 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 00:40:55.608487 | orchestrator | Wednesday 10 September 2025 00:40:51 +0000 (0:00:00.495) 0:00:19.895 *** 2025-09-10 00:40:55.608513 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9ffc2c63-19a1-4e5a-ab50-a28911e045bb) 2025-09-10 00:40:55.608525 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9ffc2c63-19a1-4e5a-ab50-a28911e045bb) 2025-09-10 00:40:55.608536 | orchestrator | 2025-09-10 00:40:55.608547 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 00:40:55.608558 | orchestrator | Wednesday 10 September 2025 00:40:51 +0000 (0:00:00.443) 0:00:20.339 *** 2025-09-10 00:40:55.608569 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-10 00:40:55.608580 | orchestrator | 2025-09-10 00:40:55.608609 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:40:55.608628 | orchestrator | Wednesday 10 September 2025 00:40:52 +0000 (0:00:00.308) 0:00:20.648 *** 2025-09-10 00:40:55.608639 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-09-10 00:40:55.608660 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-09-10 00:40:55.608671 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-09-10 00:40:55.608681 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-09-10 00:40:55.608692 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-09-10 00:40:55.608702 | orchestrator | 
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-09-10 00:40:55.608713 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-09-10 00:40:55.608724 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-09-10 00:40:55.608734 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-09-10 00:40:55.608745 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-09-10 00:40:55.608756 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-09-10 00:40:55.608767 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-09-10 00:40:55.608777 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-09-10 00:40:55.608788 | orchestrator | 2025-09-10 00:40:55.608799 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:40:55.608810 | orchestrator | Wednesday 10 September 2025 00:40:52 +0000 (0:00:00.400) 0:00:21.049 *** 2025-09-10 00:40:55.608821 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:40:55.608831 | orchestrator | 2025-09-10 00:40:55.608842 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:40:55.608853 | orchestrator | Wednesday 10 September 2025 00:40:52 +0000 (0:00:00.201) 0:00:21.250 *** 2025-09-10 00:40:55.608864 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:40:55.608875 | orchestrator | 2025-09-10 00:40:55.608885 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:40:55.608896 | orchestrator | Wednesday 10 September 2025 00:40:53 +0000 (0:00:00.677) 0:00:21.928 *** 
2025-09-10 00:40:55.608907 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:40:55.608918 | orchestrator | 2025-09-10 00:40:55.608928 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:40:55.608939 | orchestrator | Wednesday 10 September 2025 00:40:53 +0000 (0:00:00.220) 0:00:22.148 *** 2025-09-10 00:40:55.608951 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:40:55.608961 | orchestrator | 2025-09-10 00:40:55.608972 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:40:55.608983 | orchestrator | Wednesday 10 September 2025 00:40:53 +0000 (0:00:00.205) 0:00:22.354 *** 2025-09-10 00:40:55.608994 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:40:55.609005 | orchestrator | 2025-09-10 00:40:55.609016 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:40:55.609026 | orchestrator | Wednesday 10 September 2025 00:40:54 +0000 (0:00:00.204) 0:00:22.558 *** 2025-09-10 00:40:55.609037 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:40:55.609048 | orchestrator | 2025-09-10 00:40:55.609059 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:40:55.609070 | orchestrator | Wednesday 10 September 2025 00:40:54 +0000 (0:00:00.216) 0:00:22.775 *** 2025-09-10 00:40:55.609080 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:40:55.609091 | orchestrator | 2025-09-10 00:40:55.609102 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:40:55.609113 | orchestrator | Wednesday 10 September 2025 00:40:54 +0000 (0:00:00.232) 0:00:23.008 *** 2025-09-10 00:40:55.609124 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:40:55.609134 | orchestrator | 2025-09-10 00:40:55.609145 | orchestrator | TASK [Add known partitions to the list of available 
block devices] ************* 2025-09-10 00:40:55.609162 | orchestrator | Wednesday 10 September 2025 00:40:54 +0000 (0:00:00.185) 0:00:23.193 *** 2025-09-10 00:40:55.609173 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-09-10 00:40:55.609184 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-09-10 00:40:55.609195 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-09-10 00:40:55.609206 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-09-10 00:40:55.609217 | orchestrator | 2025-09-10 00:40:55.609227 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:40:55.609238 | orchestrator | Wednesday 10 September 2025 00:40:55 +0000 (0:00:00.710) 0:00:23.904 *** 2025-09-10 00:40:55.609249 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:40:55.609260 | orchestrator | 2025-09-10 00:40:55.609278 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:41:01.049670 | orchestrator | Wednesday 10 September 2025 00:40:55 +0000 (0:00:00.190) 0:00:24.094 *** 2025-09-10 00:41:01.049769 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:41:01.049786 | orchestrator | 2025-09-10 00:41:01.049804 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:41:01.049820 | orchestrator | Wednesday 10 September 2025 00:40:55 +0000 (0:00:00.164) 0:00:24.259 *** 2025-09-10 00:41:01.049831 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:41:01.049843 | orchestrator | 2025-09-10 00:41:01.049854 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:41:01.049865 | orchestrator | Wednesday 10 September 2025 00:40:55 +0000 (0:00:00.160) 0:00:24.420 *** 2025-09-10 00:41:01.049876 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:41:01.049887 | orchestrator | 2025-09-10 00:41:01.049923 | orchestrator | TASK [Set 
UUIDs for OSD VGs/LVs] *********************************************** 2025-09-10 00:41:01.049936 | orchestrator | Wednesday 10 September 2025 00:40:56 +0000 (0:00:00.178) 0:00:24.599 *** 2025-09-10 00:41:01.049947 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-09-10 00:41:01.049958 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-09-10 00:41:01.049969 | orchestrator | 2025-09-10 00:41:01.049980 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-10 00:41:01.049990 | orchestrator | Wednesday 10 September 2025 00:40:56 +0000 (0:00:00.291) 0:00:24.890 *** 2025-09-10 00:41:01.050001 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:41:01.050012 | orchestrator | 2025-09-10 00:41:01.050082 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-09-10 00:41:01.050094 | orchestrator | Wednesday 10 September 2025 00:40:56 +0000 (0:00:00.109) 0:00:25.000 *** 2025-09-10 00:41:01.050106 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:41:01.050124 | orchestrator | 2025-09-10 00:41:01.050136 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-10 00:41:01.050147 | orchestrator | Wednesday 10 September 2025 00:40:56 +0000 (0:00:00.114) 0:00:25.114 *** 2025-09-10 00:41:01.050158 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:41:01.050169 | orchestrator | 2025-09-10 00:41:01.050180 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-10 00:41:01.050193 | orchestrator | Wednesday 10 September 2025 00:40:56 +0000 (0:00:00.110) 0:00:25.225 *** 2025-09-10 00:41:01.050206 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:41:01.050219 | orchestrator | 2025-09-10 00:41:01.050233 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-10 
00:41:01.050245 | orchestrator | Wednesday 10 September 2025 00:40:56 +0000 (0:00:00.122) 0:00:25.348 *** 2025-09-10 00:41:01.050258 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '20419d67-2a88-5ee6-832e-dd0a34a7687a'}}) 2025-09-10 00:41:01.050272 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '28e77ae9-929e-5c68-8a2a-91f3bea00aca'}}) 2025-09-10 00:41:01.050285 | orchestrator | 2025-09-10 00:41:01.050297 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-10 00:41:01.050330 | orchestrator | Wednesday 10 September 2025 00:40:57 +0000 (0:00:00.165) 0:00:25.513 *** 2025-09-10 00:41:01.050344 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '20419d67-2a88-5ee6-832e-dd0a34a7687a'}})  2025-09-10 00:41:01.050358 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '28e77ae9-929e-5c68-8a2a-91f3bea00aca'}})  2025-09-10 00:41:01.050371 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:41:01.050384 | orchestrator | 2025-09-10 00:41:01.050398 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-10 00:41:01.050412 | orchestrator | Wednesday 10 September 2025 00:40:57 +0000 (0:00:00.123) 0:00:25.636 *** 2025-09-10 00:41:01.050425 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '20419d67-2a88-5ee6-832e-dd0a34a7687a'}})  2025-09-10 00:41:01.050438 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '28e77ae9-929e-5c68-8a2a-91f3bea00aca'}})  2025-09-10 00:41:01.050451 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:41:01.050464 | orchestrator | 2025-09-10 00:41:01.050477 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-10 00:41:01.050490 | 
orchestrator | Wednesday 10 September 2025 00:40:57 +0000 (0:00:00.123) 0:00:25.760 *** 2025-09-10 00:41:01.050502 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '20419d67-2a88-5ee6-832e-dd0a34a7687a'}})  2025-09-10 00:41:01.050516 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '28e77ae9-929e-5c68-8a2a-91f3bea00aca'}})  2025-09-10 00:41:01.050529 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:41:01.050543 | orchestrator | 2025-09-10 00:41:01.050554 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-10 00:41:01.050564 | orchestrator | Wednesday 10 September 2025 00:40:57 +0000 (0:00:00.125) 0:00:25.885 *** 2025-09-10 00:41:01.050575 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:41:01.050586 | orchestrator | 2025-09-10 00:41:01.050616 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-10 00:41:01.050627 | orchestrator | Wednesday 10 September 2025 00:40:57 +0000 (0:00:00.106) 0:00:25.991 *** 2025-09-10 00:41:01.050638 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:41:01.050649 | orchestrator | 2025-09-10 00:41:01.050660 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-10 00:41:01.050671 | orchestrator | Wednesday 10 September 2025 00:40:57 +0000 (0:00:00.129) 0:00:26.120 *** 2025-09-10 00:41:01.050682 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:41:01.050693 | orchestrator | 2025-09-10 00:41:01.050722 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-10 00:41:01.050734 | orchestrator | Wednesday 10 September 2025 00:40:57 +0000 (0:00:00.116) 0:00:26.237 *** 2025-09-10 00:41:01.050745 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:41:01.050756 | orchestrator | 2025-09-10 00:41:01.050766 | orchestrator | TASK 
[Set DB+WAL devices config data] ****************************************** 2025-09-10 00:41:01.050777 | orchestrator | Wednesday 10 September 2025 00:40:58 +0000 (0:00:00.311) 0:00:26.549 *** 2025-09-10 00:41:01.050788 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:41:01.050798 | orchestrator | 2025-09-10 00:41:01.050809 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-09-10 00:41:01.050820 | orchestrator | Wednesday 10 September 2025 00:40:58 +0000 (0:00:00.120) 0:00:26.669 *** 2025-09-10 00:41:01.050831 | orchestrator | ok: [testbed-node-4] => { 2025-09-10 00:41:01.050841 | orchestrator |  "ceph_osd_devices": { 2025-09-10 00:41:01.050852 | orchestrator |  "sdb": { 2025-09-10 00:41:01.050863 | orchestrator |  "osd_lvm_uuid": "20419d67-2a88-5ee6-832e-dd0a34a7687a" 2025-09-10 00:41:01.050874 | orchestrator |  }, 2025-09-10 00:41:01.050884 | orchestrator |  "sdc": { 2025-09-10 00:41:01.050903 | orchestrator |  "osd_lvm_uuid": "28e77ae9-929e-5c68-8a2a-91f3bea00aca" 2025-09-10 00:41:01.050914 | orchestrator |  } 2025-09-10 00:41:01.050925 | orchestrator |  } 2025-09-10 00:41:01.050936 | orchestrator | } 2025-09-10 00:41:01.050947 | orchestrator | 2025-09-10 00:41:01.050957 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-09-10 00:41:01.050968 | orchestrator | Wednesday 10 September 2025 00:40:58 +0000 (0:00:00.121) 0:00:26.791 *** 2025-09-10 00:41:01.050979 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:41:01.050989 | orchestrator | 2025-09-10 00:41:01.051007 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-09-10 00:41:01.051018 | orchestrator | Wednesday 10 September 2025 00:40:58 +0000 (0:00:00.115) 0:00:26.907 *** 2025-09-10 00:41:01.051029 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:41:01.051039 | orchestrator | 2025-09-10 00:41:01.051050 | orchestrator | TASK [Print 
shared DB/WAL devices] ********************************************* 2025-09-10 00:41:01.051061 | orchestrator | Wednesday 10 September 2025 00:40:58 +0000 (0:00:00.104) 0:00:27.011 *** 2025-09-10 00:41:01.051071 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:41:01.051082 | orchestrator | 2025-09-10 00:41:01.051093 | orchestrator | TASK [Print configuration data] ************************************************ 2025-09-10 00:41:01.051103 | orchestrator | Wednesday 10 September 2025 00:40:58 +0000 (0:00:00.103) 0:00:27.115 *** 2025-09-10 00:41:01.051114 | orchestrator | changed: [testbed-node-4] => { 2025-09-10 00:41:01.051124 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-09-10 00:41:01.051135 | orchestrator |  "ceph_osd_devices": { 2025-09-10 00:41:01.051145 | orchestrator |  "sdb": { 2025-09-10 00:41:01.051157 | orchestrator |  "osd_lvm_uuid": "20419d67-2a88-5ee6-832e-dd0a34a7687a" 2025-09-10 00:41:01.051172 | orchestrator |  }, 2025-09-10 00:41:01.051183 | orchestrator |  "sdc": { 2025-09-10 00:41:01.051202 | orchestrator |  "osd_lvm_uuid": "28e77ae9-929e-5c68-8a2a-91f3bea00aca" 2025-09-10 00:41:01.051214 | orchestrator |  } 2025-09-10 00:41:01.051225 | orchestrator |  }, 2025-09-10 00:41:01.051236 | orchestrator |  "lvm_volumes": [ 2025-09-10 00:41:01.051246 | orchestrator |  { 2025-09-10 00:41:01.051257 | orchestrator |  "data": "osd-block-20419d67-2a88-5ee6-832e-dd0a34a7687a", 2025-09-10 00:41:01.051268 | orchestrator |  "data_vg": "ceph-20419d67-2a88-5ee6-832e-dd0a34a7687a" 2025-09-10 00:41:01.051278 | orchestrator |  }, 2025-09-10 00:41:01.051289 | orchestrator |  { 2025-09-10 00:41:01.051300 | orchestrator |  "data": "osd-block-28e77ae9-929e-5c68-8a2a-91f3bea00aca", 2025-09-10 00:41:01.051311 | orchestrator |  "data_vg": "ceph-28e77ae9-929e-5c68-8a2a-91f3bea00aca" 2025-09-10 00:41:01.051321 | orchestrator |  } 2025-09-10 00:41:01.051332 | orchestrator |  ] 2025-09-10 00:41:01.051343 | orchestrator |  } 2025-09-10 00:41:01.051354 | 
orchestrator | } 2025-09-10 00:41:01.051364 | orchestrator | 2025-09-10 00:41:01.051375 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-09-10 00:41:01.051386 | orchestrator | Wednesday 10 September 2025 00:40:58 +0000 (0:00:00.169) 0:00:27.285 *** 2025-09-10 00:41:01.051400 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-10 00:41:01.051416 | orchestrator | 2025-09-10 00:41:01.051427 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-09-10 00:41:01.051437 | orchestrator | 2025-09-10 00:41:01.051451 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-10 00:41:01.051467 | orchestrator | Wednesday 10 September 2025 00:40:59 +0000 (0:00:00.956) 0:00:28.241 *** 2025-09-10 00:41:01.051478 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-10 00:41:01.051489 | orchestrator | 2025-09-10 00:41:01.051499 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-10 00:41:01.051510 | orchestrator | Wednesday 10 September 2025 00:41:00 +0000 (0:00:00.371) 0:00:28.613 *** 2025-09-10 00:41:01.051528 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:41:01.051538 | orchestrator | 2025-09-10 00:41:01.051549 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 00:41:01.051560 | orchestrator | Wednesday 10 September 2025 00:41:00 +0000 (0:00:00.579) 0:00:29.193 *** 2025-09-10 00:41:01.051571 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-09-10 00:41:01.051581 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-09-10 00:41:01.051607 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-09-10 
00:41:01.051618 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-09-10 00:41:01.051629 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-09-10 00:41:01.051640 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-09-10 00:41:01.051657 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-09-10 00:41:09.100900 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-09-10 00:41:09.100994 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-09-10 00:41:09.101009 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-09-10 00:41:09.101020 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-09-10 00:41:09.101031 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-09-10 00:41:09.101042 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-09-10 00:41:09.101053 | orchestrator | 2025-09-10 00:41:09.101065 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 00:41:09.101076 | orchestrator | Wednesday 10 September 2025 00:41:01 +0000 (0:00:00.341) 0:00:29.534 *** 2025-09-10 00:41:09.101087 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:41:09.101098 | orchestrator | 2025-09-10 00:41:09.101109 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 00:41:09.101120 | orchestrator | Wednesday 10 September 2025 00:41:01 +0000 (0:00:00.166) 0:00:29.701 *** 2025-09-10 00:41:09.101131 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:41:09.101142 | orchestrator | 
2025-09-10 00:41:09.101153 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 00:41:09.101164 | orchestrator | Wednesday 10 September 2025 00:41:01 +0000 (0:00:00.171) 0:00:29.872 *** 2025-09-10 00:41:09.101174 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:41:09.101185 | orchestrator | 2025-09-10 00:41:09.101196 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 00:41:09.101207 | orchestrator | Wednesday 10 September 2025 00:41:01 +0000 (0:00:00.191) 0:00:30.063 *** 2025-09-10 00:41:09.101217 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:41:09.101228 | orchestrator | 2025-09-10 00:41:09.101239 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 00:41:09.101250 | orchestrator | Wednesday 10 September 2025 00:41:01 +0000 (0:00:00.175) 0:00:30.239 *** 2025-09-10 00:41:09.101260 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:41:09.101271 | orchestrator | 2025-09-10 00:41:09.101282 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 00:41:09.101292 | orchestrator | Wednesday 10 September 2025 00:41:01 +0000 (0:00:00.154) 0:00:30.394 *** 2025-09-10 00:41:09.101303 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:41:09.101314 | orchestrator | 2025-09-10 00:41:09.101325 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 00:41:09.101335 | orchestrator | Wednesday 10 September 2025 00:41:02 +0000 (0:00:00.225) 0:00:30.619 *** 2025-09-10 00:41:09.101346 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:41:09.101377 | orchestrator | 2025-09-10 00:41:09.101389 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 00:41:09.101400 | orchestrator | Wednesday 10 September 2025 00:41:02 +0000 
(0:00:00.183) 0:00:30.802 *** 2025-09-10 00:41:09.101410 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:41:09.101421 | orchestrator | 2025-09-10 00:41:09.101448 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 00:41:09.101461 | orchestrator | Wednesday 10 September 2025 00:41:02 +0000 (0:00:00.193) 0:00:30.995 *** 2025-09-10 00:41:09.101474 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_76e3aa19-82bc-491d-ab0d-cf92a10f04de) 2025-09-10 00:41:09.101488 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_76e3aa19-82bc-491d-ab0d-cf92a10f04de) 2025-09-10 00:41:09.101501 | orchestrator | 2025-09-10 00:41:09.101514 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 00:41:09.101526 | orchestrator | Wednesday 10 September 2025 00:41:03 +0000 (0:00:00.567) 0:00:31.563 *** 2025-09-10 00:41:09.101539 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2ea24e78-5d32-46b6-abea-13531085710c) 2025-09-10 00:41:09.101552 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2ea24e78-5d32-46b6-abea-13531085710c) 2025-09-10 00:41:09.101564 | orchestrator | 2025-09-10 00:41:09.101577 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 00:41:09.101616 | orchestrator | Wednesday 10 September 2025 00:41:03 +0000 (0:00:00.641) 0:00:32.205 *** 2025-09-10 00:41:09.101632 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b86cce0a-40ed-4a07-99f1-19becb84c901) 2025-09-10 00:41:09.101644 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b86cce0a-40ed-4a07-99f1-19becb84c901) 2025-09-10 00:41:09.101656 | orchestrator | 2025-09-10 00:41:09.101668 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 00:41:09.101680 | orchestrator | 
Wednesday 10 September 2025 00:41:04 +0000 (0:00:00.439) 0:00:32.644 *** 2025-09-10 00:41:09.101693 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d735b4b4-15bb-46a1-a658-203c9ec5fb9c) 2025-09-10 00:41:09.101706 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d735b4b4-15bb-46a1-a658-203c9ec5fb9c) 2025-09-10 00:41:09.101718 | orchestrator | 2025-09-10 00:41:09.101731 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 00:41:09.101743 | orchestrator | Wednesday 10 September 2025 00:41:04 +0000 (0:00:00.399) 0:00:33.043 *** 2025-09-10 00:41:09.101755 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-10 00:41:09.101767 | orchestrator | 2025-09-10 00:41:09.101780 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:41:09.101793 | orchestrator | Wednesday 10 September 2025 00:41:04 +0000 (0:00:00.358) 0:00:33.402 *** 2025-09-10 00:41:09.101822 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-09-10 00:41:09.101834 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-09-10 00:41:09.101844 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-09-10 00:41:09.101855 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-09-10 00:41:09.101865 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-09-10 00:41:09.101876 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-09-10 00:41:09.101886 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-09-10 00:41:09.101897 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-09-10 00:41:09.101908 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-09-10 00:41:09.101928 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-09-10 00:41:09.101938 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-09-10 00:41:09.101949 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-09-10 00:41:09.101959 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-09-10 00:41:09.101970 | orchestrator | 2025-09-10 00:41:09.101980 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:41:09.101991 | orchestrator | Wednesday 10 September 2025 00:41:05 +0000 (0:00:00.394) 0:00:33.796 *** 2025-09-10 00:41:09.102002 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:41:09.102012 | orchestrator | 2025-09-10 00:41:09.102062 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:41:09.102074 | orchestrator | Wednesday 10 September 2025 00:41:05 +0000 (0:00:00.219) 0:00:34.016 *** 2025-09-10 00:41:09.102084 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:41:09.102095 | orchestrator | 2025-09-10 00:41:09.102106 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:41:09.102116 | orchestrator | Wednesday 10 September 2025 00:41:05 +0000 (0:00:00.203) 0:00:34.220 *** 2025-09-10 00:41:09.102127 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:41:09.102137 | orchestrator | 2025-09-10 00:41:09.102148 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:41:09.102158 | 
orchestrator | Wednesday 10 September 2025 00:41:05 +0000 (0:00:00.188) 0:00:34.408 *** 2025-09-10 00:41:09.102169 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:41:09.102180 | orchestrator | 2025-09-10 00:41:09.102190 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:41:09.102201 | orchestrator | Wednesday 10 September 2025 00:41:06 +0000 (0:00:00.179) 0:00:34.588 *** 2025-09-10 00:41:09.102211 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:41:09.102222 | orchestrator | 2025-09-10 00:41:09.102232 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:41:09.102243 | orchestrator | Wednesday 10 September 2025 00:41:06 +0000 (0:00:00.186) 0:00:34.775 *** 2025-09-10 00:41:09.102254 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:41:09.102264 | orchestrator | 2025-09-10 00:41:09.102275 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:41:09.102285 | orchestrator | Wednesday 10 September 2025 00:41:06 +0000 (0:00:00.657) 0:00:35.432 *** 2025-09-10 00:41:09.102296 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:41:09.102307 | orchestrator | 2025-09-10 00:41:09.102317 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:41:09.102328 | orchestrator | Wednesday 10 September 2025 00:41:07 +0000 (0:00:00.267) 0:00:35.700 *** 2025-09-10 00:41:09.102338 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:41:09.102356 | orchestrator | 2025-09-10 00:41:09.102375 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:41:09.102394 | orchestrator | Wednesday 10 September 2025 00:41:07 +0000 (0:00:00.213) 0:00:35.914 *** 2025-09-10 00:41:09.102411 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-09-10 00:41:09.102422 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2025-09-10 00:41:09.102433 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-09-10 00:41:09.102443 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-09-10 00:41:09.102454 | orchestrator | 2025-09-10 00:41:09.102464 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:41:09.102475 | orchestrator | Wednesday 10 September 2025 00:41:08 +0000 (0:00:00.735) 0:00:36.649 *** 2025-09-10 00:41:09.102485 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:41:09.102496 | orchestrator | 2025-09-10 00:41:09.102507 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:41:09.102525 | orchestrator | Wednesday 10 September 2025 00:41:08 +0000 (0:00:00.221) 0:00:36.870 *** 2025-09-10 00:41:09.102536 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:41:09.102547 | orchestrator | 2025-09-10 00:41:09.102557 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:41:09.102568 | orchestrator | Wednesday 10 September 2025 00:41:08 +0000 (0:00:00.327) 0:00:37.198 *** 2025-09-10 00:41:09.102579 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:41:09.102608 | orchestrator | 2025-09-10 00:41:09.102621 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:41:09.102632 | orchestrator | Wednesday 10 September 2025 00:41:08 +0000 (0:00:00.186) 0:00:37.385 *** 2025-09-10 00:41:09.102649 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:41:09.102661 | orchestrator | 2025-09-10 00:41:09.102671 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-09-10 00:41:09.102690 | orchestrator | Wednesday 10 September 2025 00:41:09 +0000 (0:00:00.200) 0:00:37.585 *** 2025-09-10 00:41:13.146126 | orchestrator | ok: [testbed-node-5] => (item={'key': 
'sdb', 'value': None}) 2025-09-10 00:41:13.146216 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-09-10 00:41:13.146230 | orchestrator | 2025-09-10 00:41:13.146243 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-10 00:41:13.146253 | orchestrator | Wednesday 10 September 2025 00:41:09 +0000 (0:00:00.165) 0:00:37.751 *** 2025-09-10 00:41:13.146264 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:41:13.146275 | orchestrator | 2025-09-10 00:41:13.146286 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-09-10 00:41:13.146297 | orchestrator | Wednesday 10 September 2025 00:41:09 +0000 (0:00:00.129) 0:00:37.880 *** 2025-09-10 00:41:13.146308 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:41:13.146319 | orchestrator | 2025-09-10 00:41:13.146330 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-10 00:41:13.146341 | orchestrator | Wednesday 10 September 2025 00:41:09 +0000 (0:00:00.135) 0:00:38.016 *** 2025-09-10 00:41:13.146351 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:41:13.146362 | orchestrator | 2025-09-10 00:41:13.146372 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-10 00:41:13.146383 | orchestrator | Wednesday 10 September 2025 00:41:09 +0000 (0:00:00.133) 0:00:38.149 *** 2025-09-10 00:41:13.146394 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:41:13.146405 | orchestrator | 2025-09-10 00:41:13.146416 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-10 00:41:13.146427 | orchestrator | Wednesday 10 September 2025 00:41:10 +0000 (0:00:00.410) 0:00:38.560 *** 2025-09-10 00:41:13.146438 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '36dac960-67a7-54a4-bbd2-b6f8976b18f7'}}) 
2025-09-10 00:41:13.146450 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f4115e81-926e-57fb-8145-65084efa4466'}}) 2025-09-10 00:41:13.146461 | orchestrator | 2025-09-10 00:41:13.146471 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-10 00:41:13.146482 | orchestrator | Wednesday 10 September 2025 00:41:10 +0000 (0:00:00.221) 0:00:38.782 *** 2025-09-10 00:41:13.146493 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '36dac960-67a7-54a4-bbd2-b6f8976b18f7'}})  2025-09-10 00:41:13.146505 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f4115e81-926e-57fb-8145-65084efa4466'}})  2025-09-10 00:41:13.146516 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:41:13.146527 | orchestrator | 2025-09-10 00:41:13.146567 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-10 00:41:13.146579 | orchestrator | Wednesday 10 September 2025 00:41:10 +0000 (0:00:00.183) 0:00:38.966 *** 2025-09-10 00:41:13.146633 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '36dac960-67a7-54a4-bbd2-b6f8976b18f7'}})  2025-09-10 00:41:13.146669 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f4115e81-926e-57fb-8145-65084efa4466'}})  2025-09-10 00:41:13.146682 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:41:13.146695 | orchestrator | 2025-09-10 00:41:13.146707 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-10 00:41:13.146718 | orchestrator | Wednesday 10 September 2025 00:41:10 +0000 (0:00:00.158) 0:00:39.124 *** 2025-09-10 00:41:13.146731 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '36dac960-67a7-54a4-bbd2-b6f8976b18f7'}})  2025-09-10 
00:41:13.146743 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f4115e81-926e-57fb-8145-65084efa4466'}})  2025-09-10 00:41:13.146755 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:41:13.146767 | orchestrator | 2025-09-10 00:41:13.146779 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-10 00:41:13.146792 | orchestrator | Wednesday 10 September 2025 00:41:10 +0000 (0:00:00.145) 0:00:39.269 *** 2025-09-10 00:41:13.146804 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:41:13.146815 | orchestrator | 2025-09-10 00:41:13.146827 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-10 00:41:13.146839 | orchestrator | Wednesday 10 September 2025 00:41:10 +0000 (0:00:00.128) 0:00:39.398 *** 2025-09-10 00:41:13.146851 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:41:13.146863 | orchestrator | 2025-09-10 00:41:13.146875 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-10 00:41:13.146887 | orchestrator | Wednesday 10 September 2025 00:41:11 +0000 (0:00:00.142) 0:00:39.540 *** 2025-09-10 00:41:13.146899 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:41:13.146911 | orchestrator | 2025-09-10 00:41:13.146923 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-10 00:41:13.146935 | orchestrator | Wednesday 10 September 2025 00:41:11 +0000 (0:00:00.136) 0:00:39.677 *** 2025-09-10 00:41:13.146947 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:41:13.146959 | orchestrator | 2025-09-10 00:41:13.146971 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-09-10 00:41:13.146983 | orchestrator | Wednesday 10 September 2025 00:41:11 +0000 (0:00:00.130) 0:00:39.808 *** 2025-09-10 00:41:13.146994 | orchestrator | skipping: [testbed-node-5] 
2025-09-10 00:41:13.147004 | orchestrator |
2025-09-10 00:41:13.147015 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-09-10 00:41:13.147025 | orchestrator | Wednesday 10 September 2025 00:41:11 +0000 (0:00:00.208) 0:00:40.017 ***
2025-09-10 00:41:13.147036 | orchestrator | ok: [testbed-node-5] => {
2025-09-10 00:41:13.147047 | orchestrator |     "ceph_osd_devices": {
2025-09-10 00:41:13.147057 | orchestrator |         "sdb": {
2025-09-10 00:41:13.147068 | orchestrator |             "osd_lvm_uuid": "36dac960-67a7-54a4-bbd2-b6f8976b18f7"
2025-09-10 00:41:13.147094 | orchestrator |         },
2025-09-10 00:41:13.147105 | orchestrator |         "sdc": {
2025-09-10 00:41:13.147116 | orchestrator |             "osd_lvm_uuid": "f4115e81-926e-57fb-8145-65084efa4466"
2025-09-10 00:41:13.147126 | orchestrator |         }
2025-09-10 00:41:13.147137 | orchestrator |     }
2025-09-10 00:41:13.147148 | orchestrator | }
2025-09-10 00:41:13.147159 | orchestrator |
2025-09-10 00:41:13.147169 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-09-10 00:41:13.147180 | orchestrator | Wednesday 10 September 2025 00:41:11 +0000 (0:00:00.117) 0:00:40.134 ***
2025-09-10 00:41:13.147191 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:41:13.147201 | orchestrator |
2025-09-10 00:41:13.147211 | orchestrator | TASK [Print DB devices] ********************************************************
2025-09-10 00:41:13.147222 | orchestrator | Wednesday 10 September 2025 00:41:11 +0000 (0:00:00.107) 0:00:40.242 ***
2025-09-10 00:41:13.147233 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:41:13.147243 | orchestrator |
2025-09-10 00:41:13.147254 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-09-10 00:41:13.147272 | orchestrator | Wednesday 10 September 2025 00:41:11 +0000 (0:00:00.216) 0:00:40.459 ***
2025-09-10 00:41:13.147283 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:41:13.147293 | orchestrator |
2025-09-10 00:41:13.147304 | orchestrator | TASK [Print configuration data] ************************************************
2025-09-10 00:41:13.147315 | orchestrator | Wednesday 10 September 2025 00:41:12 +0000 (0:00:00.093) 0:00:40.553 ***
2025-09-10 00:41:13.147325 | orchestrator | changed: [testbed-node-5] => {
2025-09-10 00:41:13.147336 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-09-10 00:41:13.147346 | orchestrator |         "ceph_osd_devices": {
2025-09-10 00:41:13.147357 | orchestrator |             "sdb": {
2025-09-10 00:41:13.147368 | orchestrator |                 "osd_lvm_uuid": "36dac960-67a7-54a4-bbd2-b6f8976b18f7"
2025-09-10 00:41:13.147379 | orchestrator |             },
2025-09-10 00:41:13.147389 | orchestrator |             "sdc": {
2025-09-10 00:41:13.147400 | orchestrator |                 "osd_lvm_uuid": "f4115e81-926e-57fb-8145-65084efa4466"
2025-09-10 00:41:13.147411 | orchestrator |             }
2025-09-10 00:41:13.147421 | orchestrator |         },
2025-09-10 00:41:13.147432 | orchestrator |         "lvm_volumes": [
2025-09-10 00:41:13.147442 | orchestrator |             {
2025-09-10 00:41:13.147453 | orchestrator |                 "data": "osd-block-36dac960-67a7-54a4-bbd2-b6f8976b18f7",
2025-09-10 00:41:13.147464 | orchestrator |                 "data_vg": "ceph-36dac960-67a7-54a4-bbd2-b6f8976b18f7"
2025-09-10 00:41:13.147474 | orchestrator |             },
2025-09-10 00:41:13.147485 | orchestrator |             {
2025-09-10 00:41:13.147496 | orchestrator |                 "data": "osd-block-f4115e81-926e-57fb-8145-65084efa4466",
2025-09-10 00:41:13.147506 | orchestrator |                 "data_vg": "ceph-f4115e81-926e-57fb-8145-65084efa4466"
2025-09-10 00:41:13.147518 | orchestrator |             }
2025-09-10 00:41:13.147528 | orchestrator |         ]
2025-09-10 00:41:13.147539 | orchestrator |     }
2025-09-10 00:41:13.147553 | orchestrator | }
2025-09-10 00:41:13.147564 | orchestrator |
2025-09-10 00:41:13.147575 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-09-10 00:41:13.147586 | orchestrator | Wednesday 10 September 2025 00:41:12 +0000 (0:00:00.149) 0:00:40.703 ***
2025-09-10 00:41:13.147613 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-09-10 00:41:13.147624 | orchestrator |
2025-09-10 00:41:13.147634 | orchestrator | PLAY RECAP *********************************************************************
2025-09-10 00:41:13.147653 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-09-10 00:41:13.147665 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-09-10 00:41:13.147676 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-09-10 00:41:13.147687 | orchestrator |
2025-09-10 00:41:13.147698 | orchestrator |
2025-09-10 00:41:13.147708 | orchestrator |
2025-09-10 00:41:13.147719 | orchestrator | TASKS RECAP ********************************************************************
2025-09-10 00:41:13.147730 | orchestrator | Wednesday 10 September 2025 00:41:13 +0000 (0:00:00.921) 0:00:41.624 ***
2025-09-10 00:41:13.147740 | orchestrator | ===============================================================================
2025-09-10 00:41:13.147751 | orchestrator | Write configuration file ------------------------------------------------ 4.25s
2025-09-10 00:41:13.147761 | orchestrator | Add known partitions to the list of available block devices ------------- 1.17s
2025-09-10 00:41:13.147772 | orchestrator | Add known links to the list of available block devices ------------------ 1.08s
2025-09-10 00:41:13.147782 | orchestrator | Add known partitions to the list of available block devices ------------- 1.08s
2025-09-10 00:41:13.147793 | orchestrator | Get initial list of available block devices ----------------------------- 1.05s
2025-09-10 00:41:13.147817 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.85s
2025-09-10 00:41:13.147828 | orchestrator | Add known links to the list of available block devices ------------------ 0.82s
2025-09-10 00:41:13.147838 | orchestrator | Add known partitions to the list of available block devices ------------- 0.74s
2025-09-10 00:41:13.147849 | orchestrator | Add known partitions to the list of available block devices ------------- 0.71s
2025-09-10 00:41:13.147859 | orchestrator | Add known links to the list of available block devices ------------------ 0.71s
2025-09-10 00:41:13.147870 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s
2025-09-10 00:41:13.147880 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.67s
2025-09-10 00:41:13.147891 | orchestrator | Add known links to the list of available block devices ------------------ 0.66s
2025-09-10 00:41:13.147902 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.66s
2025-09-10 00:41:13.147919 | orchestrator | Add known partitions to the list of available block devices ------------- 0.66s
2025-09-10 00:41:13.389005 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s
2025-09-10 00:41:13.389087 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s
2025-09-10 00:41:13.389101 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.62s
2025-09-10 00:41:13.389112 | orchestrator | Set WAL devices config data --------------------------------------------- 0.59s
2025-09-10 00:41:13.389123 | orchestrator | Print configuration data ------------------------------------------------ 0.58s
2025-09-10 00:41:35.990678 | orchestrator | 2025-09-10 00:41:35 | INFO  | Task 50a6c8e0-2d8e-48b9-adcb-8ea9eb98532d (sync inventory) is running in background. Output coming soon.
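The "Print configuration data" output above shows the mapping this play computes: each entry in `ceph_osd_devices` (keyed by device name, carrying an `osd_lvm_uuid`) becomes one `lvm_volumes` entry with an `osd-block-<uuid>` LV inside a `ceph-<uuid>` VG. A minimal sketch of that block-only mapping, as an illustration of the structure seen in the log rather than the actual OSISM role code:

```python
# Hypothetical sketch: derive the lvm_volumes structure printed by the
# "Print configuration data" task from ceph_osd_devices, for the
# block-only case (no DB/WAL devices, as in this run).
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "36dac960-67a7-54a4-bbd2-b6f8976b18f7"},
    "sdc": {"osd_lvm_uuid": "f4115e81-926e-57fb-8145-65084efa4466"},
}

# One entry per OSD device: LV "osd-block-<uuid>" in VG "ceph-<uuid>".
lvm_volumes = [
    {
        "data": f"osd-block-{v['osd_lvm_uuid']}",
        "data_vg": f"ceph-{v['osd_lvm_uuid']}",
    }
    for v in ceph_osd_devices.values()
]

print(lvm_volumes)
```

Note that the device names themselves (`sdb`, `sdc`) drop out of `lvm_volumes`; only the stable UUIDs survive, which is what makes the resulting configuration robust against device renumbering.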
2025-09-10 00:41:59.887059 | orchestrator | 2025-09-10 00:41:37 | INFO  | Starting group_vars file reorganization
2025-09-10 00:41:59.887154 | orchestrator | 2025-09-10 00:41:37 | INFO  | Moved 0 file(s) to their respective directories
2025-09-10 00:41:59.887164 | orchestrator | 2025-09-10 00:41:37 | INFO  | Group_vars file reorganization completed
2025-09-10 00:41:59.887172 | orchestrator | 2025-09-10 00:41:39 | INFO  | Starting variable preparation from inventory
2025-09-10 00:41:59.887178 | orchestrator | 2025-09-10 00:41:43 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-09-10 00:41:59.887185 | orchestrator | 2025-09-10 00:41:43 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-09-10 00:41:59.887192 | orchestrator | 2025-09-10 00:41:43 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-09-10 00:41:59.887198 | orchestrator | 2025-09-10 00:41:43 | INFO  | 3 file(s) written, 6 host(s) processed
2025-09-10 00:41:59.887205 | orchestrator | 2025-09-10 00:41:43 | INFO  | Variable preparation completed
2025-09-10 00:41:59.887211 | orchestrator | 2025-09-10 00:41:43 | INFO  | Starting inventory overwrite handling
2025-09-10 00:41:59.887218 | orchestrator | 2025-09-10 00:41:43 | INFO  | Handling group overwrites in 99-overwrite
2025-09-10 00:41:59.887225 | orchestrator | 2025-09-10 00:41:43 | INFO  | Removing group frr:children from 60-generic
2025-09-10 00:41:59.887231 | orchestrator | 2025-09-10 00:41:43 | INFO  | Removing group storage:children from 50-kolla
2025-09-10 00:41:59.887238 | orchestrator | 2025-09-10 00:41:43 | INFO  | Removing group netbird:children from 50-infrastruture
2025-09-10 00:41:59.887244 | orchestrator | 2025-09-10 00:41:43 | INFO  | Removing group ceph-mds from 50-ceph
2025-09-10 00:41:59.887251 | orchestrator | 2025-09-10 00:41:43 | INFO  | Removing group ceph-rgw from 50-ceph
2025-09-10 00:41:59.887257 | orchestrator | 2025-09-10 00:41:43 | INFO  | Handling group overwrites in 20-roles
2025-09-10 00:41:59.887263 | orchestrator | 2025-09-10 00:41:43 | INFO  | Removing group k3s_node from 50-infrastruture
2025-09-10 00:41:59.887293 | orchestrator | 2025-09-10 00:41:43 | INFO  | Removed 6 group(s) in total
2025-09-10 00:41:59.887300 | orchestrator | 2025-09-10 00:41:43 | INFO  | Inventory overwrite handling completed
2025-09-10 00:41:59.887306 | orchestrator | 2025-09-10 00:41:44 | INFO  | Starting merge of inventory files
2025-09-10 00:41:59.887312 | orchestrator | 2025-09-10 00:41:44 | INFO  | Inventory files merged successfully
2025-09-10 00:41:59.887318 | orchestrator | 2025-09-10 00:41:48 | INFO  | Generating ClusterShell configuration from Ansible inventory
2025-09-10 00:41:59.887325 | orchestrator | 2025-09-10 00:41:58 | INFO  | Successfully wrote ClusterShell configuration
2025-09-10 00:41:59.887331 | orchestrator | [master 5424441] 2025-09-10-00-41
2025-09-10 00:41:59.887339 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2025-09-10 00:42:01.506761 | orchestrator | 2025-09-10 00:42:01 | INFO  | Task d41de35c-bf45-4351-9aec-ad7604a2e5e6 (ceph-create-lvm-devices) was prepared for execution.
2025-09-10 00:42:01.506863 | orchestrator | 2025-09-10 00:42:01 | INFO  | It takes a moment until task d41de35c-bf45-4351-9aec-ad7604a2e5e6 (ceph-create-lvm-devices) has been started and output is visible here.
2025-09-10 00:42:13.389099 | orchestrator |
2025-09-10 00:42:13.389205 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-09-10 00:42:13.389222 | orchestrator |
2025-09-10 00:42:13.389235 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-10 00:42:13.389247 | orchestrator | Wednesday 10 September 2025 00:42:05 +0000 (0:00:00.331) 0:00:00.331 ***
2025-09-10 00:42:13.389258 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-10 00:42:13.389270 | orchestrator |
2025-09-10 00:42:13.389281 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-10 00:42:13.389292 | orchestrator | Wednesday 10 September 2025 00:42:05 +0000 (0:00:00.242) 0:00:00.573 ***
2025-09-10 00:42:13.389303 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:42:13.389315 | orchestrator |
2025-09-10 00:42:13.389326 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-10 00:42:13.389337 | orchestrator | Wednesday 10 September 2025 00:42:06 +0000 (0:00:00.234) 0:00:00.808 ***
2025-09-10 00:42:13.389347 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-09-10 00:42:13.389360 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-09-10 00:42:13.389371 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-09-10 00:42:13.389382 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-09-10 00:42:13.389393 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-09-10 00:42:13.389404 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-09-10 00:42:13.389414 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-09-10 00:42:13.389425 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-09-10 00:42:13.389436 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-09-10 00:42:13.389446 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-09-10 00:42:13.389457 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-09-10 00:42:13.389468 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-09-10 00:42:13.389479 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-09-10 00:42:13.389489 | orchestrator |
2025-09-10 00:42:13.389500 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-10 00:42:13.389535 | orchestrator | Wednesday 10 September 2025 00:42:06 +0000 (0:00:00.444) 0:00:01.252 ***
2025-09-10 00:42:13.389546 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:42:13.389557 | orchestrator |
2025-09-10 00:42:13.389568 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-10 00:42:13.389631 | orchestrator | Wednesday 10 September 2025 00:42:07 +0000 (0:00:00.506) 0:00:01.759 ***
2025-09-10 00:42:13.389645 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:42:13.389658 | orchestrator |
2025-09-10 00:42:13.389671 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-10 00:42:13.389683 | orchestrator | Wednesday 10 September 2025 00:42:07 +0000 (0:00:00.204) 0:00:01.963 ***
2025-09-10 00:42:13.389702 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:42:13.389715 | orchestrator |
2025-09-10 00:42:13.389727 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-10 00:42:13.389740 | orchestrator | Wednesday 10 September 2025 00:42:07 +0000 (0:00:00.189) 0:00:02.153 ***
2025-09-10 00:42:13.389752 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:42:13.389764 | orchestrator |
2025-09-10 00:42:13.389776 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-10 00:42:13.389788 | orchestrator | Wednesday 10 September 2025 00:42:07 +0000 (0:00:00.181) 0:00:02.335 ***
2025-09-10 00:42:13.389801 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:42:13.389814 | orchestrator |
2025-09-10 00:42:13.389826 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-10 00:42:13.389839 | orchestrator | Wednesday 10 September 2025 00:42:07 +0000 (0:00:00.203) 0:00:02.538 ***
2025-09-10 00:42:13.389852 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:42:13.389864 | orchestrator |
2025-09-10 00:42:13.389877 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-10 00:42:13.389889 | orchestrator | Wednesday 10 September 2025 00:42:08 +0000 (0:00:00.202) 0:00:02.741 ***
2025-09-10 00:42:13.389902 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:42:13.389915 | orchestrator |
2025-09-10 00:42:13.389927 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-10 00:42:13.389939 | orchestrator | Wednesday 10 September 2025 00:42:08 +0000 (0:00:00.200) 0:00:02.942 ***
2025-09-10 00:42:13.389952 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:42:13.389964 | orchestrator |
2025-09-10 00:42:13.389977 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-10 00:42:13.389990 | orchestrator | Wednesday 10 September 2025 00:42:08 +0000 (0:00:00.183) 0:00:03.126 ***
2025-09-10 00:42:13.390000 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_885f5351-8a1d-42ab-b6e2-24d16c7f1b28)
2025-09-10 00:42:13.390012 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_885f5351-8a1d-42ab-b6e2-24d16c7f1b28)
2025-09-10 00:42:13.390077 | orchestrator |
2025-09-10 00:42:13.390089 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-10 00:42:13.390100 | orchestrator | Wednesday 10 September 2025 00:42:08 +0000 (0:00:00.380) 0:00:03.506 ***
2025-09-10 00:42:13.390128 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_15be4489-a4ef-46b7-8669-fa4e45790ef6)
2025-09-10 00:42:13.390140 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_15be4489-a4ef-46b7-8669-fa4e45790ef6)
2025-09-10 00:42:13.390151 | orchestrator |
2025-09-10 00:42:13.390161 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-10 00:42:13.390172 | orchestrator | Wednesday 10 September 2025 00:42:09 +0000 (0:00:00.431) 0:00:03.938 ***
2025-09-10 00:42:13.390183 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2400494c-3cf2-4780-9c6b-527d492c6bfd)
2025-09-10 00:42:13.390194 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2400494c-3cf2-4780-9c6b-527d492c6bfd)
2025-09-10 00:42:13.390204 | orchestrator |
2025-09-10 00:42:13.390215 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-10 00:42:13.390236 | orchestrator | Wednesday 10 September 2025 00:42:09 +0000 (0:00:00.641) 0:00:04.579 ***
2025-09-10 00:42:13.390246 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b761a3bc-d220-47d2-9376-37cff0079757)
2025-09-10 00:42:13.390257 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b761a3bc-d220-47d2-9376-37cff0079757)
2025-09-10 00:42:13.390268 | orchestrator |
2025-09-10 00:42:13.390278 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-10 00:42:13.390289 | orchestrator | Wednesday 10 September 2025 00:42:10 +0000 (0:00:00.943) 0:00:05.523 ***
2025-09-10 00:42:13.390299 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-09-10 00:42:13.390310 | orchestrator |
2025-09-10 00:42:13.390321 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-10 00:42:13.390332 | orchestrator | Wednesday 10 September 2025 00:42:11 +0000 (0:00:00.400) 0:00:05.924 ***
2025-09-10 00:42:13.390342 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-09-10 00:42:13.390353 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-09-10 00:42:13.390364 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-09-10 00:42:13.390374 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-09-10 00:42:13.390385 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-09-10 00:42:13.390395 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-09-10 00:42:13.390412 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-09-10 00:42:13.390430 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-09-10 00:42:13.390446 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-09-10 00:42:13.390461 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-09-10 00:42:13.390477 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-09-10 00:42:13.390494 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-09-10 00:42:13.390509 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-09-10 00:42:13.390526 | orchestrator |
2025-09-10 00:42:13.390542 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-10 00:42:13.390558 | orchestrator | Wednesday 10 September 2025 00:42:11 +0000 (0:00:00.457) 0:00:06.381 ***
2025-09-10 00:42:13.390576 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:42:13.390625 | orchestrator |
2025-09-10 00:42:13.390644 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-10 00:42:13.390662 | orchestrator | Wednesday 10 September 2025 00:42:11 +0000 (0:00:00.201) 0:00:06.582 ***
2025-09-10 00:42:13.390679 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:42:13.390698 | orchestrator |
2025-09-10 00:42:13.390716 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-10 00:42:13.390734 | orchestrator | Wednesday 10 September 2025 00:42:12 +0000 (0:00:00.202) 0:00:06.785 ***
2025-09-10 00:42:13.390749 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:42:13.390760 | orchestrator |
2025-09-10 00:42:13.390770 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-10 00:42:13.390781 | orchestrator | Wednesday 10 September 2025 00:42:12 +0000 (0:00:00.205) 0:00:06.990 ***
2025-09-10 00:42:13.390791 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:42:13.390802 | orchestrator |
2025-09-10 00:42:13.390813 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-10 00:42:13.390834 | orchestrator | Wednesday 10 September 2025 00:42:12 +0000 (0:00:00.223) 0:00:07.214 ***
2025-09-10 00:42:13.390845 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:42:13.390856 | orchestrator |
2025-09-10 00:42:13.390867 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-10 00:42:13.390877 | orchestrator | Wednesday 10 September 2025 00:42:12 +0000 (0:00:00.205) 0:00:07.420 ***
2025-09-10 00:42:13.390888 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:42:13.390898 | orchestrator |
2025-09-10 00:42:13.390909 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-10 00:42:13.390920 | orchestrator | Wednesday 10 September 2025 00:42:12 +0000 (0:00:00.211) 0:00:07.631 ***
2025-09-10 00:42:13.390930 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:42:13.390941 | orchestrator |
2025-09-10 00:42:13.390952 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-10 00:42:13.390962 | orchestrator | Wednesday 10 September 2025 00:42:13 +0000 (0:00:00.197) 0:00:07.828 ***
2025-09-10 00:42:13.390983 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:42:22.298445 | orchestrator |
2025-09-10 00:42:22.298553 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-10 00:42:22.298571 | orchestrator | Wednesday 10 September 2025 00:42:13 +0000 (0:00:00.216) 0:00:08.044 ***
2025-09-10 00:42:22.298584 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-09-10 00:42:22.298628 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-09-10 00:42:22.298640 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-09-10 00:42:22.298651 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-09-10 00:42:22.298663 | orchestrator |
2025-09-10 00:42:22.298674 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-10 00:42:22.298685 | orchestrator | Wednesday 10 September 2025 00:42:14 +0000 (0:00:01.197) 0:00:09.242 ***
2025-09-10 00:42:22.298696 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:42:22.298707 | orchestrator |
2025-09-10 00:42:22.298718 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-10 00:42:22.298729 | orchestrator | Wednesday 10 September 2025 00:42:14 +0000 (0:00:00.206) 0:00:09.449 ***
2025-09-10 00:42:22.298740 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:42:22.298751 | orchestrator |
2025-09-10 00:42:22.298762 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-10 00:42:22.298773 | orchestrator | Wednesday 10 September 2025 00:42:15 +0000 (0:00:00.246) 0:00:09.695 ***
2025-09-10 00:42:22.298784 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:42:22.298794 | orchestrator |
2025-09-10 00:42:22.298806 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-10 00:42:22.298817 | orchestrator | Wednesday 10 September 2025 00:42:15 +0000 (0:00:00.263) 0:00:09.959 ***
2025-09-10 00:42:22.298828 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:42:22.298839 | orchestrator |
2025-09-10 00:42:22.298849 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-09-10 00:42:22.298860 | orchestrator | Wednesday 10 September 2025 00:42:15 +0000 (0:00:00.215) 0:00:10.174 ***
2025-09-10 00:42:22.298871 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:42:22.298882 | orchestrator |
2025-09-10 00:42:22.298892 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-09-10 00:42:22.298904 | orchestrator | Wednesday 10 September 2025 00:42:15 +0000 (0:00:00.153) 0:00:10.328 ***
2025-09-10 00:42:22.298915 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4b73e898-cb4c-523f-8aca-971ee560c7ea'}})
2025-09-10 00:42:22.298927 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2bea83b6-6800-529c-bdd8-a613f3421a6f'}})
2025-09-10 00:42:22.298937 | orchestrator |
2025-09-10 00:42:22.298948 | orchestrator | TASK [Create block VGs] ********************************************************
2025-09-10 00:42:22.298959 | orchestrator | Wednesday 10 September 2025 00:42:15 +0000 (0:00:00.233) 0:00:10.561 ***
2025-09-10 00:42:22.298973 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-4b73e898-cb4c-523f-8aca-971ee560c7ea', 'data_vg': 'ceph-4b73e898-cb4c-523f-8aca-971ee560c7ea'})
2025-09-10 00:42:22.299009 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2bea83b6-6800-529c-bdd8-a613f3421a6f', 'data_vg': 'ceph-2bea83b6-6800-529c-bdd8-a613f3421a6f'})
2025-09-10 00:42:22.299022 | orchestrator |
2025-09-10 00:42:22.299053 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-09-10 00:42:22.299072 | orchestrator | Wednesday 10 September 2025 00:42:18 +0000 (0:00:02.108) 0:00:12.669 ***
2025-09-10 00:42:22.299086 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4b73e898-cb4c-523f-8aca-971ee560c7ea', 'data_vg': 'ceph-4b73e898-cb4c-523f-8aca-971ee560c7ea'})
2025-09-10 00:42:22.299099 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2bea83b6-6800-529c-bdd8-a613f3421a6f', 'data_vg': 'ceph-2bea83b6-6800-529c-bdd8-a613f3421a6f'})
2025-09-10 00:42:22.299112 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:42:22.299124 | orchestrator |
2025-09-10 00:42:22.299137 | orchestrator | TASK [Create block LVs] ********************************************************
2025-09-10 00:42:22.299149 | orchestrator | Wednesday 10 September 2025 00:42:18 +0000 (0:00:00.218) 0:00:12.888 ***
2025-09-10 00:42:22.299161 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-4b73e898-cb4c-523f-8aca-971ee560c7ea', 'data_vg': 'ceph-4b73e898-cb4c-523f-8aca-971ee560c7ea'})
2025-09-10 00:42:22.299173 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2bea83b6-6800-529c-bdd8-a613f3421a6f', 'data_vg': 'ceph-2bea83b6-6800-529c-bdd8-a613f3421a6f'})
2025-09-10 00:42:22.299185 | orchestrator |
2025-09-10 00:42:22.299196 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-09-10 00:42:22.299207 | orchestrator | Wednesday 10 September 2025 00:42:19 +0000 (0:00:01.594) 0:00:14.483 ***
2025-09-10 00:42:22.299218 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4b73e898-cb4c-523f-8aca-971ee560c7ea', 'data_vg': 'ceph-4b73e898-cb4c-523f-8aca-971ee560c7ea'})
2025-09-10 00:42:22.299229 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2bea83b6-6800-529c-bdd8-a613f3421a6f', 'data_vg': 'ceph-2bea83b6-6800-529c-bdd8-a613f3421a6f'})
2025-09-10 00:42:22.299240 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:42:22.299251 | orchestrator |
2025-09-10 00:42:22.299262 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-09-10 00:42:22.299272 | orchestrator | Wednesday 10 September 2025 00:42:19 +0000 (0:00:00.178) 0:00:14.661 ***
2025-09-10 00:42:22.299283 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:42:22.299294 | orchestrator |
2025-09-10 00:42:22.299305 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-09-10 00:42:22.299334 | orchestrator | Wednesday 10 September 2025 00:42:20 +0000 (0:00:00.165) 0:00:14.827 ***
2025-09-10 00:42:22.299345 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4b73e898-cb4c-523f-8aca-971ee560c7ea', 'data_vg': 'ceph-4b73e898-cb4c-523f-8aca-971ee560c7ea'})
2025-09-10 00:42:22.299356 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2bea83b6-6800-529c-bdd8-a613f3421a6f', 'data_vg': 'ceph-2bea83b6-6800-529c-bdd8-a613f3421a6f'})
2025-09-10 00:42:22.299366 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:42:22.299377 | orchestrator |
2025-09-10 00:42:22.299388 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-09-10 00:42:22.299398 | orchestrator | Wednesday 10 September 2025 00:42:20 +0000 (0:00:00.420) 0:00:15.247 ***
2025-09-10 00:42:22.299409 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:42:22.299419 | orchestrator |
2025-09-10 00:42:22.299430 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-09-10 00:42:22.299441 | orchestrator | Wednesday 10 September 2025 00:42:20 +0000 (0:00:00.137) 0:00:15.384 ***
2025-09-10 00:42:22.299451 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4b73e898-cb4c-523f-8aca-971ee560c7ea', 'data_vg': 'ceph-4b73e898-cb4c-523f-8aca-971ee560c7ea'})
2025-09-10 00:42:22.299471 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2bea83b6-6800-529c-bdd8-a613f3421a6f', 'data_vg': 'ceph-2bea83b6-6800-529c-bdd8-a613f3421a6f'})
2025-09-10 00:42:22.299481 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:42:22.299492 | orchestrator |
2025-09-10 00:42:22.299502 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-09-10 00:42:22.299513 | orchestrator | Wednesday 10 September 2025 00:42:20 +0000 (0:00:00.213) 0:00:15.597 ***
2025-09-10 00:42:22.299524 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:42:22.299534 | orchestrator |
2025-09-10 00:42:22.299545 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-09-10 00:42:22.299556 | orchestrator | Wednesday 10 September 2025 00:42:21 +0000 (0:00:00.185) 0:00:15.783 ***
2025-09-10 00:42:22.299566 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4b73e898-cb4c-523f-8aca-971ee560c7ea', 'data_vg': 'ceph-4b73e898-cb4c-523f-8aca-971ee560c7ea'})
2025-09-10 00:42:22.299577 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2bea83b6-6800-529c-bdd8-a613f3421a6f', 'data_vg': 'ceph-2bea83b6-6800-529c-bdd8-a613f3421a6f'})
2025-09-10 00:42:22.299588 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:42:22.299633 | orchestrator |
2025-09-10 00:42:22.299644 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-09-10 00:42:22.299655 | orchestrator | Wednesday 10 September 2025 00:42:21 +0000 (0:00:00.223) 0:00:16.006 ***
2025-09-10 00:42:22.299666 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:42:22.299677 | orchestrator |
2025-09-10 00:42:22.299687 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-09-10 00:42:22.299698 | orchestrator | Wednesday 10 September 2025 00:42:21 +0000 (0:00:00.147) 0:00:16.153 ***
2025-09-10 00:42:22.299715 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4b73e898-cb4c-523f-8aca-971ee560c7ea', 'data_vg': 'ceph-4b73e898-cb4c-523f-8aca-971ee560c7ea'})
2025-09-10 00:42:22.299726 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2bea83b6-6800-529c-bdd8-a613f3421a6f', 'data_vg': 'ceph-2bea83b6-6800-529c-bdd8-a613f3421a6f'})
2025-09-10 00:42:22.299737 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:42:22.299748 | orchestrator |
2025-09-10 00:42:22.299758 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-09-10 00:42:22.299769 | orchestrator | Wednesday 10 September 2025 00:42:21 +0000 (0:00:00.160) 0:00:16.314 ***
2025-09-10 00:42:22.299780 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4b73e898-cb4c-523f-8aca-971ee560c7ea', 'data_vg': 'ceph-4b73e898-cb4c-523f-8aca-971ee560c7ea'})
2025-09-10 00:42:22.299791 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2bea83b6-6800-529c-bdd8-a613f3421a6f', 'data_vg': 'ceph-2bea83b6-6800-529c-bdd8-a613f3421a6f'})  2025-09-10 00:42:22.299802 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:42:22.299813 | orchestrator | 2025-09-10 00:42:22.299823 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-10 00:42:22.299834 | orchestrator | Wednesday 10 September 2025 00:42:21 +0000 (0:00:00.155) 0:00:16.470 *** 2025-09-10 00:42:22.299845 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4b73e898-cb4c-523f-8aca-971ee560c7ea', 'data_vg': 'ceph-4b73e898-cb4c-523f-8aca-971ee560c7ea'})  2025-09-10 00:42:22.299856 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2bea83b6-6800-529c-bdd8-a613f3421a6f', 'data_vg': 'ceph-2bea83b6-6800-529c-bdd8-a613f3421a6f'})  2025-09-10 00:42:22.299866 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:42:22.299877 | orchestrator | 2025-09-10 00:42:22.299888 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-10 00:42:22.299899 | orchestrator | Wednesday 10 September 2025 00:42:21 +0000 (0:00:00.151) 0:00:16.621 *** 2025-09-10 00:42:22.299909 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:42:22.299927 | orchestrator | 2025-09-10 00:42:22.299938 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-10 00:42:22.299949 | orchestrator | Wednesday 10 September 2025 00:42:22 +0000 (0:00:00.166) 0:00:16.788 *** 2025-09-10 00:42:22.299960 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:42:22.299970 | orchestrator | 2025-09-10 00:42:22.299988 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-09-10 00:42:29.271274 | orchestrator | Wednesday 10 September 2025 00:42:22 +0000 
(0:00:00.169) 0:00:16.957 *** 2025-09-10 00:42:29.271382 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:42:29.271398 | orchestrator | 2025-09-10 00:42:29.271411 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-09-10 00:42:29.271423 | orchestrator | Wednesday 10 September 2025 00:42:22 +0000 (0:00:00.149) 0:00:17.107 *** 2025-09-10 00:42:29.271434 | orchestrator | ok: [testbed-node-3] => { 2025-09-10 00:42:29.271446 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-09-10 00:42:29.271457 | orchestrator | } 2025-09-10 00:42:29.271468 | orchestrator | 2025-09-10 00:42:29.271480 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-09-10 00:42:29.271491 | orchestrator | Wednesday 10 September 2025 00:42:22 +0000 (0:00:00.551) 0:00:17.658 *** 2025-09-10 00:42:29.271501 | orchestrator | ok: [testbed-node-3] => { 2025-09-10 00:42:29.271512 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-09-10 00:42:29.271523 | orchestrator | } 2025-09-10 00:42:29.271534 | orchestrator | 2025-09-10 00:42:29.271545 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-09-10 00:42:29.271556 | orchestrator | Wednesday 10 September 2025 00:42:23 +0000 (0:00:00.153) 0:00:17.812 *** 2025-09-10 00:42:29.271567 | orchestrator | ok: [testbed-node-3] => { 2025-09-10 00:42:29.271578 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-09-10 00:42:29.271589 | orchestrator | } 2025-09-10 00:42:29.271655 | orchestrator | 2025-09-10 00:42:29.271667 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-09-10 00:42:29.271678 | orchestrator | Wednesday 10 September 2025 00:42:23 +0000 (0:00:00.148) 0:00:17.961 *** 2025-09-10 00:42:29.271689 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:42:29.271700 | orchestrator | 2025-09-10 00:42:29.271711 | orchestrator | TASK [Gather 
WAL VGs with total and available size in bytes] ******************* 2025-09-10 00:42:29.271722 | orchestrator | Wednesday 10 September 2025 00:42:24 +0000 (0:00:00.723) 0:00:18.684 *** 2025-09-10 00:42:29.271733 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:42:29.271744 | orchestrator | 2025-09-10 00:42:29.271755 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-10 00:42:29.271766 | orchestrator | Wednesday 10 September 2025 00:42:24 +0000 (0:00:00.494) 0:00:19.178 *** 2025-09-10 00:42:29.271777 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:42:29.271787 | orchestrator | 2025-09-10 00:42:29.271798 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-10 00:42:29.271811 | orchestrator | Wednesday 10 September 2025 00:42:25 +0000 (0:00:00.525) 0:00:19.703 *** 2025-09-10 00:42:29.271823 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:42:29.271836 | orchestrator | 2025-09-10 00:42:29.271848 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-10 00:42:29.271861 | orchestrator | Wednesday 10 September 2025 00:42:25 +0000 (0:00:00.192) 0:00:19.895 *** 2025-09-10 00:42:29.271873 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:42:29.271886 | orchestrator | 2025-09-10 00:42:29.271898 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-10 00:42:29.271911 | orchestrator | Wednesday 10 September 2025 00:42:25 +0000 (0:00:00.164) 0:00:20.060 *** 2025-09-10 00:42:29.271923 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:42:29.271936 | orchestrator | 2025-09-10 00:42:29.271949 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-09-10 00:42:29.271962 | orchestrator | Wednesday 10 September 2025 00:42:25 +0000 (0:00:00.157) 0:00:20.217 *** 2025-09-10 00:42:29.271974 | orchestrator | ok: 
[testbed-node-3] => { 2025-09-10 00:42:29.272010 | orchestrator |  "vgs_report": { 2025-09-10 00:42:29.272024 | orchestrator |  "vg": [] 2025-09-10 00:42:29.272036 | orchestrator |  } 2025-09-10 00:42:29.272048 | orchestrator | } 2025-09-10 00:42:29.272061 | orchestrator | 2025-09-10 00:42:29.272073 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-09-10 00:42:29.272085 | orchestrator | Wednesday 10 September 2025 00:42:25 +0000 (0:00:00.159) 0:00:20.377 *** 2025-09-10 00:42:29.272098 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:42:29.272111 | orchestrator | 2025-09-10 00:42:29.272123 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-09-10 00:42:29.272135 | orchestrator | Wednesday 10 September 2025 00:42:25 +0000 (0:00:00.140) 0:00:20.518 *** 2025-09-10 00:42:29.272147 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:42:29.272159 | orchestrator | 2025-09-10 00:42:29.272170 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-09-10 00:42:29.272180 | orchestrator | Wednesday 10 September 2025 00:42:25 +0000 (0:00:00.114) 0:00:20.633 *** 2025-09-10 00:42:29.272191 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:42:29.272201 | orchestrator | 2025-09-10 00:42:29.272212 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-09-10 00:42:29.272223 | orchestrator | Wednesday 10 September 2025 00:42:26 +0000 (0:00:00.345) 0:00:20.978 *** 2025-09-10 00:42:29.272233 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:42:29.272244 | orchestrator | 2025-09-10 00:42:29.272255 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-09-10 00:42:29.272266 | orchestrator | Wednesday 10 September 2025 00:42:26 +0000 (0:00:00.165) 0:00:21.144 *** 2025-09-10 00:42:29.272276 | orchestrator | skipping: 
[testbed-node-3] 2025-09-10 00:42:29.272287 | orchestrator | 2025-09-10 00:42:29.272314 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-09-10 00:42:29.272326 | orchestrator | Wednesday 10 September 2025 00:42:26 +0000 (0:00:00.141) 0:00:21.286 *** 2025-09-10 00:42:29.272336 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:42:29.272347 | orchestrator | 2025-09-10 00:42:29.272358 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-09-10 00:42:29.272368 | orchestrator | Wednesday 10 September 2025 00:42:26 +0000 (0:00:00.146) 0:00:21.433 *** 2025-09-10 00:42:29.272379 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:42:29.272390 | orchestrator | 2025-09-10 00:42:29.272401 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-09-10 00:42:29.272411 | orchestrator | Wednesday 10 September 2025 00:42:26 +0000 (0:00:00.178) 0:00:21.612 *** 2025-09-10 00:42:29.272422 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:42:29.272433 | orchestrator | 2025-09-10 00:42:29.272444 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-09-10 00:42:29.272472 | orchestrator | Wednesday 10 September 2025 00:42:27 +0000 (0:00:00.142) 0:00:21.754 *** 2025-09-10 00:42:29.272483 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:42:29.272494 | orchestrator | 2025-09-10 00:42:29.272505 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-09-10 00:42:29.272516 | orchestrator | Wednesday 10 September 2025 00:42:27 +0000 (0:00:00.151) 0:00:21.906 *** 2025-09-10 00:42:29.272526 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:42:29.272537 | orchestrator | 2025-09-10 00:42:29.272548 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-09-10 00:42:29.272558 | 
orchestrator | Wednesday 10 September 2025 00:42:27 +0000 (0:00:00.138) 0:00:22.044 *** 2025-09-10 00:42:29.272569 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:42:29.272579 | orchestrator | 2025-09-10 00:42:29.272590 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-09-10 00:42:29.272618 | orchestrator | Wednesday 10 September 2025 00:42:27 +0000 (0:00:00.146) 0:00:22.191 *** 2025-09-10 00:42:29.272629 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:42:29.272640 | orchestrator | 2025-09-10 00:42:29.272660 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-09-10 00:42:29.272671 | orchestrator | Wednesday 10 September 2025 00:42:27 +0000 (0:00:00.139) 0:00:22.330 *** 2025-09-10 00:42:29.272682 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:42:29.272692 | orchestrator | 2025-09-10 00:42:29.272703 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-09-10 00:42:29.272714 | orchestrator | Wednesday 10 September 2025 00:42:27 +0000 (0:00:00.139) 0:00:22.470 *** 2025-09-10 00:42:29.272725 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:42:29.272735 | orchestrator | 2025-09-10 00:42:29.272746 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-09-10 00:42:29.272756 | orchestrator | Wednesday 10 September 2025 00:42:27 +0000 (0:00:00.143) 0:00:22.614 *** 2025-09-10 00:42:29.272768 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4b73e898-cb4c-523f-8aca-971ee560c7ea', 'data_vg': 'ceph-4b73e898-cb4c-523f-8aca-971ee560c7ea'})  2025-09-10 00:42:29.272781 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2bea83b6-6800-529c-bdd8-a613f3421a6f', 'data_vg': 'ceph-2bea83b6-6800-529c-bdd8-a613f3421a6f'})  2025-09-10 00:42:29.272791 | orchestrator | skipping: [testbed-node-3] 2025-09-10 
00:42:29.272802 | orchestrator | 2025-09-10 00:42:29.272813 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-09-10 00:42:29.272823 | orchestrator | Wednesday 10 September 2025 00:42:28 +0000 (0:00:00.392) 0:00:23.006 *** 2025-09-10 00:42:29.272834 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4b73e898-cb4c-523f-8aca-971ee560c7ea', 'data_vg': 'ceph-4b73e898-cb4c-523f-8aca-971ee560c7ea'})  2025-09-10 00:42:29.272845 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2bea83b6-6800-529c-bdd8-a613f3421a6f', 'data_vg': 'ceph-2bea83b6-6800-529c-bdd8-a613f3421a6f'})  2025-09-10 00:42:29.272856 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:42:29.272866 | orchestrator | 2025-09-10 00:42:29.272877 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-09-10 00:42:29.272887 | orchestrator | Wednesday 10 September 2025 00:42:28 +0000 (0:00:00.189) 0:00:23.195 *** 2025-09-10 00:42:29.272903 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4b73e898-cb4c-523f-8aca-971ee560c7ea', 'data_vg': 'ceph-4b73e898-cb4c-523f-8aca-971ee560c7ea'})  2025-09-10 00:42:29.272914 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2bea83b6-6800-529c-bdd8-a613f3421a6f', 'data_vg': 'ceph-2bea83b6-6800-529c-bdd8-a613f3421a6f'})  2025-09-10 00:42:29.272925 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:42:29.272935 | orchestrator | 2025-09-10 00:42:29.272946 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-09-10 00:42:29.272957 | orchestrator | Wednesday 10 September 2025 00:42:28 +0000 (0:00:00.150) 0:00:23.346 *** 2025-09-10 00:42:29.272968 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4b73e898-cb4c-523f-8aca-971ee560c7ea', 'data_vg': 'ceph-4b73e898-cb4c-523f-8aca-971ee560c7ea'})  2025-09-10 
00:42:29.272978 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2bea83b6-6800-529c-bdd8-a613f3421a6f', 'data_vg': 'ceph-2bea83b6-6800-529c-bdd8-a613f3421a6f'})  2025-09-10 00:42:29.272989 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:42:29.273000 | orchestrator | 2025-09-10 00:42:29.273010 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-09-10 00:42:29.273021 | orchestrator | Wednesday 10 September 2025 00:42:28 +0000 (0:00:00.228) 0:00:23.574 *** 2025-09-10 00:42:29.273031 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4b73e898-cb4c-523f-8aca-971ee560c7ea', 'data_vg': 'ceph-4b73e898-cb4c-523f-8aca-971ee560c7ea'})  2025-09-10 00:42:29.273042 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2bea83b6-6800-529c-bdd8-a613f3421a6f', 'data_vg': 'ceph-2bea83b6-6800-529c-bdd8-a613f3421a6f'})  2025-09-10 00:42:29.273053 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:42:29.273070 | orchestrator | 2025-09-10 00:42:29.273081 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-09-10 00:42:29.273091 | orchestrator | Wednesday 10 September 2025 00:42:29 +0000 (0:00:00.162) 0:00:23.737 *** 2025-09-10 00:42:29.273102 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4b73e898-cb4c-523f-8aca-971ee560c7ea', 'data_vg': 'ceph-4b73e898-cb4c-523f-8aca-971ee560c7ea'})  2025-09-10 00:42:29.273119 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2bea83b6-6800-529c-bdd8-a613f3421a6f', 'data_vg': 'ceph-2bea83b6-6800-529c-bdd8-a613f3421a6f'})  2025-09-10 00:42:34.755275 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:42:34.755375 | orchestrator | 2025-09-10 00:42:34.755387 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-09-10 00:42:34.755398 | orchestrator | Wednesday 10 September 2025 
00:42:29 +0000 (0:00:00.189) 0:00:23.927 *** 2025-09-10 00:42:34.755407 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4b73e898-cb4c-523f-8aca-971ee560c7ea', 'data_vg': 'ceph-4b73e898-cb4c-523f-8aca-971ee560c7ea'})  2025-09-10 00:42:34.755417 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2bea83b6-6800-529c-bdd8-a613f3421a6f', 'data_vg': 'ceph-2bea83b6-6800-529c-bdd8-a613f3421a6f'})  2025-09-10 00:42:34.755425 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:42:34.755433 | orchestrator | 2025-09-10 00:42:34.755441 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-09-10 00:42:34.755449 | orchestrator | Wednesday 10 September 2025 00:42:29 +0000 (0:00:00.160) 0:00:24.088 *** 2025-09-10 00:42:34.755458 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4b73e898-cb4c-523f-8aca-971ee560c7ea', 'data_vg': 'ceph-4b73e898-cb4c-523f-8aca-971ee560c7ea'})  2025-09-10 00:42:34.755466 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2bea83b6-6800-529c-bdd8-a613f3421a6f', 'data_vg': 'ceph-2bea83b6-6800-529c-bdd8-a613f3421a6f'})  2025-09-10 00:42:34.755474 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:42:34.755482 | orchestrator | 2025-09-10 00:42:34.755490 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-10 00:42:34.755498 | orchestrator | Wednesday 10 September 2025 00:42:29 +0000 (0:00:00.215) 0:00:24.304 *** 2025-09-10 00:42:34.755506 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:42:34.755514 | orchestrator | 2025-09-10 00:42:34.755522 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-09-10 00:42:34.755530 | orchestrator | Wednesday 10 September 2025 00:42:30 +0000 (0:00:00.507) 0:00:24.812 *** 2025-09-10 00:42:34.755538 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:42:34.755546 | 
orchestrator | 2025-09-10 00:42:34.755553 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-10 00:42:34.755561 | orchestrator | Wednesday 10 September 2025 00:42:30 +0000 (0:00:00.513) 0:00:25.325 *** 2025-09-10 00:42:34.755569 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:42:34.755577 | orchestrator | 2025-09-10 00:42:34.755585 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-10 00:42:34.755635 | orchestrator | Wednesday 10 September 2025 00:42:30 +0000 (0:00:00.158) 0:00:25.484 *** 2025-09-10 00:42:34.755644 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-2bea83b6-6800-529c-bdd8-a613f3421a6f', 'vg_name': 'ceph-2bea83b6-6800-529c-bdd8-a613f3421a6f'}) 2025-09-10 00:42:34.755653 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-4b73e898-cb4c-523f-8aca-971ee560c7ea', 'vg_name': 'ceph-4b73e898-cb4c-523f-8aca-971ee560c7ea'}) 2025-09-10 00:42:34.755661 | orchestrator | 2025-09-10 00:42:34.755669 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-10 00:42:34.755677 | orchestrator | Wednesday 10 September 2025 00:42:30 +0000 (0:00:00.167) 0:00:25.652 *** 2025-09-10 00:42:34.755685 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4b73e898-cb4c-523f-8aca-971ee560c7ea', 'data_vg': 'ceph-4b73e898-cb4c-523f-8aca-971ee560c7ea'})  2025-09-10 00:42:34.755711 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2bea83b6-6800-529c-bdd8-a613f3421a6f', 'data_vg': 'ceph-2bea83b6-6800-529c-bdd8-a613f3421a6f'})  2025-09-10 00:42:34.755719 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:42:34.755727 | orchestrator | 2025-09-10 00:42:34.755735 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-10 00:42:34.755742 | orchestrator | Wednesday 10 September 2025 00:42:31 
+0000 (0:00:00.398) 0:00:26.051 *** 2025-09-10 00:42:34.755750 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4b73e898-cb4c-523f-8aca-971ee560c7ea', 'data_vg': 'ceph-4b73e898-cb4c-523f-8aca-971ee560c7ea'})  2025-09-10 00:42:34.755758 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2bea83b6-6800-529c-bdd8-a613f3421a6f', 'data_vg': 'ceph-2bea83b6-6800-529c-bdd8-a613f3421a6f'})  2025-09-10 00:42:34.755766 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:42:34.755774 | orchestrator | 2025-09-10 00:42:34.755781 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-09-10 00:42:34.755789 | orchestrator | Wednesday 10 September 2025 00:42:31 +0000 (0:00:00.157) 0:00:26.208 *** 2025-09-10 00:42:34.755798 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4b73e898-cb4c-523f-8aca-971ee560c7ea', 'data_vg': 'ceph-4b73e898-cb4c-523f-8aca-971ee560c7ea'})  2025-09-10 00:42:34.755806 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2bea83b6-6800-529c-bdd8-a613f3421a6f', 'data_vg': 'ceph-2bea83b6-6800-529c-bdd8-a613f3421a6f'})  2025-09-10 00:42:34.755814 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:42:34.755823 | orchestrator | 2025-09-10 00:42:34.755832 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-09-10 00:42:34.755840 | orchestrator | Wednesday 10 September 2025 00:42:31 +0000 (0:00:00.144) 0:00:26.352 *** 2025-09-10 00:42:34.755849 | orchestrator | ok: [testbed-node-3] => { 2025-09-10 00:42:34.755858 | orchestrator |  "lvm_report": { 2025-09-10 00:42:34.755868 | orchestrator |  "lv": [ 2025-09-10 00:42:34.755877 | orchestrator |  { 2025-09-10 00:42:34.755900 | orchestrator |  "lv_name": "osd-block-2bea83b6-6800-529c-bdd8-a613f3421a6f", 2025-09-10 00:42:34.755911 | orchestrator |  "vg_name": "ceph-2bea83b6-6800-529c-bdd8-a613f3421a6f" 2025-09-10 
00:42:34.755920 | orchestrator |  }, 2025-09-10 00:42:34.755929 | orchestrator |  { 2025-09-10 00:42:34.755937 | orchestrator |  "lv_name": "osd-block-4b73e898-cb4c-523f-8aca-971ee560c7ea", 2025-09-10 00:42:34.755946 | orchestrator |  "vg_name": "ceph-4b73e898-cb4c-523f-8aca-971ee560c7ea" 2025-09-10 00:42:34.755955 | orchestrator |  } 2025-09-10 00:42:34.755963 | orchestrator |  ], 2025-09-10 00:42:34.755973 | orchestrator |  "pv": [ 2025-09-10 00:42:34.755981 | orchestrator |  { 2025-09-10 00:42:34.755991 | orchestrator |  "pv_name": "/dev/sdb", 2025-09-10 00:42:34.755999 | orchestrator |  "vg_name": "ceph-4b73e898-cb4c-523f-8aca-971ee560c7ea" 2025-09-10 00:42:34.756007 | orchestrator |  }, 2025-09-10 00:42:34.756015 | orchestrator |  { 2025-09-10 00:42:34.756022 | orchestrator |  "pv_name": "/dev/sdc", 2025-09-10 00:42:34.756030 | orchestrator |  "vg_name": "ceph-2bea83b6-6800-529c-bdd8-a613f3421a6f" 2025-09-10 00:42:34.756038 | orchestrator |  } 2025-09-10 00:42:34.756045 | orchestrator |  ] 2025-09-10 00:42:34.756053 | orchestrator |  } 2025-09-10 00:42:34.756062 | orchestrator | } 2025-09-10 00:42:34.756069 | orchestrator | 2025-09-10 00:42:34.756077 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-10 00:42:34.756085 | orchestrator | 2025-09-10 00:42:34.756093 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-10 00:42:34.756101 | orchestrator | Wednesday 10 September 2025 00:42:31 +0000 (0:00:00.286) 0:00:26.639 *** 2025-09-10 00:42:34.756108 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-10 00:42:34.756123 | orchestrator | 2025-09-10 00:42:34.756131 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-10 00:42:34.756139 | orchestrator | Wednesday 10 September 2025 00:42:32 +0000 (0:00:00.232) 0:00:26.872 *** 2025-09-10 00:42:34.756146 | orchestrator | ok: 
[testbed-node-4] 2025-09-10 00:42:34.756154 | orchestrator | 2025-09-10 00:42:34.756162 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 00:42:34.756170 | orchestrator | Wednesday 10 September 2025 00:42:32 +0000 (0:00:00.217) 0:00:27.089 *** 2025-09-10 00:42:34.756192 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-09-10 00:42:34.756200 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-09-10 00:42:34.756208 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-09-10 00:42:34.756215 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-09-10 00:42:34.756223 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-09-10 00:42:34.756231 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-09-10 00:42:34.756239 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-09-10 00:42:34.756250 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-09-10 00:42:34.756258 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-09-10 00:42:34.756266 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-09-10 00:42:34.756273 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-09-10 00:42:34.756281 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-09-10 00:42:34.756289 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-09-10 00:42:34.756297 | orchestrator | 2025-09-10 
00:42:34.756304 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 00:42:34.756312 | orchestrator | Wednesday 10 September 2025 00:42:32 +0000 (0:00:00.393) 0:00:27.482 *** 2025-09-10 00:42:34.756319 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:42:34.756327 | orchestrator | 2025-09-10 00:42:34.756335 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 00:42:34.756342 | orchestrator | Wednesday 10 September 2025 00:42:33 +0000 (0:00:00.223) 0:00:27.706 *** 2025-09-10 00:42:34.756350 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:42:34.756358 | orchestrator | 2025-09-10 00:42:34.756365 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 00:42:34.756373 | orchestrator | Wednesday 10 September 2025 00:42:33 +0000 (0:00:00.202) 0:00:27.908 *** 2025-09-10 00:42:34.756381 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:42:34.756388 | orchestrator | 2025-09-10 00:42:34.756396 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 00:42:34.756403 | orchestrator | Wednesday 10 September 2025 00:42:33 +0000 (0:00:00.712) 0:00:28.621 *** 2025-09-10 00:42:34.756411 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:42:34.756419 | orchestrator | 2025-09-10 00:42:34.756426 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 00:42:34.756434 | orchestrator | Wednesday 10 September 2025 00:42:34 +0000 (0:00:00.186) 0:00:28.808 *** 2025-09-10 00:42:34.756441 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:42:34.756449 | orchestrator | 2025-09-10 00:42:34.756457 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 00:42:34.756464 | orchestrator | Wednesday 10 September 2025 00:42:34 +0000 (0:00:00.205) 
0:00:29.014 ***
2025-09-10 00:42:34.756472 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:34.756480 | orchestrator |
2025-09-10 00:42:34.756493 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-10 00:42:34.756501 | orchestrator | Wednesday 10 September 2025 00:42:34 +0000 (0:00:00.203) 0:00:29.218 ***
2025-09-10 00:42:34.756509 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:34.756517 | orchestrator |
2025-09-10 00:42:34.756529 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-10 00:42:45.090459 | orchestrator | Wednesday 10 September 2025 00:42:34 +0000 (0:00:00.198) 0:00:29.416 ***
2025-09-10 00:42:45.090563 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:45.090580 | orchestrator |
2025-09-10 00:42:45.090636 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-10 00:42:45.090650 | orchestrator | Wednesday 10 September 2025 00:42:34 +0000 (0:00:00.196) 0:00:29.612 ***
2025-09-10 00:42:45.090662 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_1d1bd04c-2fb3-49ff-963e-06aac3d99067)
2025-09-10 00:42:45.090674 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_1d1bd04c-2fb3-49ff-963e-06aac3d99067)
2025-09-10 00:42:45.090685 | orchestrator |
2025-09-10 00:42:45.090697 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-10 00:42:45.090708 | orchestrator | Wednesday 10 September 2025 00:42:35 +0000 (0:00:00.413) 0:00:30.026 ***
2025-09-10 00:42:45.090719 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4b58dc27-9074-4d6f-a7a6-fd15259a7e00)
2025-09-10 00:42:45.090729 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_4b58dc27-9074-4d6f-a7a6-fd15259a7e00)
2025-09-10 00:42:45.090740 | orchestrator |
2025-09-10 00:42:45.090751 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-10 00:42:45.090762 | orchestrator | Wednesday 10 September 2025 00:42:35 +0000 (0:00:00.444) 0:00:30.470 ***
2025-09-10 00:42:45.090773 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4a1ca5dc-6f40-4107-905c-b866d721086e)
2025-09-10 00:42:45.090783 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_4a1ca5dc-6f40-4107-905c-b866d721086e)
2025-09-10 00:42:45.090794 | orchestrator |
2025-09-10 00:42:45.090805 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-10 00:42:45.090816 | orchestrator | Wednesday 10 September 2025 00:42:36 +0000 (0:00:00.444) 0:00:30.914 ***
2025-09-10 00:42:45.090826 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9ffc2c63-19a1-4e5a-ab50-a28911e045bb)
2025-09-10 00:42:45.090837 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9ffc2c63-19a1-4e5a-ab50-a28911e045bb)
2025-09-10 00:42:45.090848 | orchestrator |
2025-09-10 00:42:45.090859 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-10 00:42:45.090870 | orchestrator | Wednesday 10 September 2025 00:42:36 +0000 (0:00:00.482) 0:00:31.397 ***
2025-09-10 00:42:45.090880 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-09-10 00:42:45.090891 | orchestrator |
2025-09-10 00:42:45.090902 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-10 00:42:45.090913 | orchestrator | Wednesday 10 September 2025 00:42:37 +0000 (0:00:00.322) 0:00:31.719 ***
2025-09-10 00:42:45.090924 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-09-10 00:42:45.090952 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-09-10 00:42:45.090965 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-09-10 00:42:45.090978 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-09-10 00:42:45.090991 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-09-10 00:42:45.091004 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-09-10 00:42:45.091017 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-09-10 00:42:45.091052 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-09-10 00:42:45.091066 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-09-10 00:42:45.091077 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-09-10 00:42:45.091090 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-09-10 00:42:45.091103 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-09-10 00:42:45.091116 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-09-10 00:42:45.091128 | orchestrator |
2025-09-10 00:42:45.091139 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-10 00:42:45.091150 | orchestrator | Wednesday 10 September 2025 00:42:37 +0000 (0:00:00.620) 0:00:32.340 ***
2025-09-10 00:42:45.091160 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:45.091171 | orchestrator |
2025-09-10 00:42:45.091182 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-10 00:42:45.091193 | orchestrator | Wednesday 10 September 2025 00:42:37 +0000 (0:00:00.204) 0:00:32.544 ***
2025-09-10 00:42:45.091203 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:45.091214 | orchestrator |
2025-09-10 00:42:45.091225 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-10 00:42:45.091236 | orchestrator | Wednesday 10 September 2025 00:42:38 +0000 (0:00:00.203) 0:00:32.748 ***
2025-09-10 00:42:45.091247 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:45.091257 | orchestrator |
2025-09-10 00:42:45.091268 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-10 00:42:45.091279 | orchestrator | Wednesday 10 September 2025 00:42:38 +0000 (0:00:00.217) 0:00:32.965 ***
2025-09-10 00:42:45.091290 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:45.091300 | orchestrator |
2025-09-10 00:42:45.091329 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-10 00:42:45.091341 | orchestrator | Wednesday 10 September 2025 00:42:38 +0000 (0:00:00.220) 0:00:33.186 ***
2025-09-10 00:42:45.091352 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:45.091362 | orchestrator |
2025-09-10 00:42:45.091373 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-10 00:42:45.091384 | orchestrator | Wednesday 10 September 2025 00:42:38 +0000 (0:00:00.213) 0:00:33.400 ***
2025-09-10 00:42:45.091395 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:45.091406 | orchestrator |
2025-09-10 00:42:45.091416 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-10 00:42:45.091427 | orchestrator | Wednesday 10 September 2025 00:42:38 +0000 (0:00:00.216) 0:00:33.617 ***
2025-09-10 00:42:45.091438 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:45.091448 | orchestrator |
2025-09-10 00:42:45.091459 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-10 00:42:45.091470 | orchestrator | Wednesday 10 September 2025 00:42:39 +0000 (0:00:00.183) 0:00:33.800 ***
2025-09-10 00:42:45.091480 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:45.091491 | orchestrator |
2025-09-10 00:42:45.091502 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-10 00:42:45.091513 | orchestrator | Wednesday 10 September 2025 00:42:39 +0000 (0:00:00.194) 0:00:33.995 ***
2025-09-10 00:42:45.091523 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-09-10 00:42:45.091534 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-09-10 00:42:45.091545 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-09-10 00:42:45.091555 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-09-10 00:42:45.091566 | orchestrator |
2025-09-10 00:42:45.091578 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-10 00:42:45.091589 | orchestrator | Wednesday 10 September 2025 00:42:40 +0000 (0:00:00.843) 0:00:34.839 ***
2025-09-10 00:42:45.091626 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:45.091637 | orchestrator |
2025-09-10 00:42:45.091648 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-10 00:42:45.091659 | orchestrator | Wednesday 10 September 2025 00:42:40 +0000 (0:00:00.238) 0:00:35.077 ***
2025-09-10 00:42:45.091669 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:45.091680 | orchestrator |
2025-09-10 00:42:45.091691 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-10 00:42:45.091702 | orchestrator | Wednesday 10 September 2025 00:42:40 +0000 (0:00:00.184) 0:00:35.262 ***
2025-09-10 00:42:45.091712 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:45.091723 | orchestrator |
2025-09-10 00:42:45.091734 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-10 00:42:45.091744 | orchestrator | Wednesday 10 September 2025 00:42:41 +0000 (0:00:00.645) 0:00:35.907 ***
2025-09-10 00:42:45.091755 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:45.091766 | orchestrator |
2025-09-10 00:42:45.091776 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-09-10 00:42:45.091787 | orchestrator | Wednesday 10 September 2025 00:42:41 +0000 (0:00:00.228) 0:00:36.135 ***
2025-09-10 00:42:45.091798 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:45.091809 | orchestrator |
2025-09-10 00:42:45.091820 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-09-10 00:42:45.091831 | orchestrator | Wednesday 10 September 2025 00:42:41 +0000 (0:00:00.127) 0:00:36.263 ***
2025-09-10 00:42:45.091841 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '20419d67-2a88-5ee6-832e-dd0a34a7687a'}})
2025-09-10 00:42:45.091853 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '28e77ae9-929e-5c68-8a2a-91f3bea00aca'}})
2025-09-10 00:42:45.091863 | orchestrator |
2025-09-10 00:42:45.091874 | orchestrator | TASK [Create block VGs] ********************************************************
2025-09-10 00:42:45.091885 | orchestrator | Wednesday 10 September 2025 00:42:41 +0000 (0:00:00.193) 0:00:36.456 ***
2025-09-10 00:42:45.091896 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-20419d67-2a88-5ee6-832e-dd0a34a7687a', 'data_vg': 'ceph-20419d67-2a88-5ee6-832e-dd0a34a7687a'})
2025-09-10 00:42:45.091908 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-28e77ae9-929e-5c68-8a2a-91f3bea00aca', 'data_vg': 'ceph-28e77ae9-929e-5c68-8a2a-91f3bea00aca'})
2025-09-10 00:42:45.091918 | orchestrator |
2025-09-10 00:42:45.091929 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-09-10 00:42:45.091940 | orchestrator | Wednesday 10 September 2025 00:42:43 +0000 (0:00:01.866) 0:00:38.322 ***
2025-09-10 00:42:45.091950 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-20419d67-2a88-5ee6-832e-dd0a34a7687a', 'data_vg': 'ceph-20419d67-2a88-5ee6-832e-dd0a34a7687a'})
2025-09-10 00:42:45.091963 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-28e77ae9-929e-5c68-8a2a-91f3bea00aca', 'data_vg': 'ceph-28e77ae9-929e-5c68-8a2a-91f3bea00aca'})
2025-09-10 00:42:45.091974 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:45.091984 | orchestrator |
2025-09-10 00:42:45.091995 | orchestrator | TASK [Create block LVs] ********************************************************
2025-09-10 00:42:45.092006 | orchestrator | Wednesday 10 September 2025 00:42:43 +0000 (0:00:00.149) 0:00:38.472 ***
2025-09-10 00:42:45.092016 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-20419d67-2a88-5ee6-832e-dd0a34a7687a', 'data_vg': 'ceph-20419d67-2a88-5ee6-832e-dd0a34a7687a'})
2025-09-10 00:42:45.092027 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-28e77ae9-929e-5c68-8a2a-91f3bea00aca', 'data_vg': 'ceph-28e77ae9-929e-5c68-8a2a-91f3bea00aca'})
2025-09-10 00:42:45.092038 | orchestrator |
2025-09-10 00:42:45.092056 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-09-10 00:42:50.836128 | orchestrator | Wednesday 10 September 2025 00:42:45 +0000 (0:00:01.276) 0:00:39.749 ***
2025-09-10 00:42:50.836266 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-20419d67-2a88-5ee6-832e-dd0a34a7687a', 'data_vg': 'ceph-20419d67-2a88-5ee6-832e-dd0a34a7687a'})
2025-09-10 00:42:50.836284 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-28e77ae9-929e-5c68-8a2a-91f3bea00aca', 'data_vg': 'ceph-28e77ae9-929e-5c68-8a2a-91f3bea00aca'})
2025-09-10 00:42:50.836297 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:50.836309 | orchestrator |
2025-09-10 00:42:50.836321 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-09-10 00:42:50.836333 | orchestrator | Wednesday 10 September 2025 00:42:45 +0000 (0:00:00.158) 0:00:39.907 ***
2025-09-10 00:42:50.836344 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:50.836355 | orchestrator |
2025-09-10 00:42:50.836365 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-09-10 00:42:50.836377 | orchestrator | Wednesday 10 September 2025 00:42:45 +0000 (0:00:00.139) 0:00:40.047 ***
2025-09-10 00:42:50.836388 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-20419d67-2a88-5ee6-832e-dd0a34a7687a', 'data_vg': 'ceph-20419d67-2a88-5ee6-832e-dd0a34a7687a'})
2025-09-10 00:42:50.836416 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-28e77ae9-929e-5c68-8a2a-91f3bea00aca', 'data_vg': 'ceph-28e77ae9-929e-5c68-8a2a-91f3bea00aca'})
2025-09-10 00:42:50.836428 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:50.836438 | orchestrator |
2025-09-10 00:42:50.836449 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-09-10 00:42:50.836460 | orchestrator | Wednesday 10 September 2025 00:42:45 +0000 (0:00:00.150) 0:00:40.198 ***
2025-09-10 00:42:50.836471 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:50.836481 | orchestrator |
2025-09-10 00:42:50.836492 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-09-10 00:42:50.836503 | orchestrator | Wednesday 10 September 2025 00:42:45 +0000 (0:00:00.132) 0:00:40.330 ***
2025-09-10 00:42:50.836513 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-20419d67-2a88-5ee6-832e-dd0a34a7687a', 'data_vg': 'ceph-20419d67-2a88-5ee6-832e-dd0a34a7687a'})
2025-09-10 00:42:50.836524 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-28e77ae9-929e-5c68-8a2a-91f3bea00aca', 'data_vg': 'ceph-28e77ae9-929e-5c68-8a2a-91f3bea00aca'})
2025-09-10 00:42:50.836535 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:50.836546 | orchestrator |
2025-09-10 00:42:50.836556 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-09-10 00:42:50.836567 | orchestrator | Wednesday 10 September 2025 00:42:45 +0000 (0:00:00.146) 0:00:40.477 ***
2025-09-10 00:42:50.836583 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:50.836631 | orchestrator |
2025-09-10 00:42:50.836642 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-09-10 00:42:50.836655 | orchestrator | Wednesday 10 September 2025 00:42:46 +0000 (0:00:00.369) 0:00:40.847 ***
2025-09-10 00:42:50.836667 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-20419d67-2a88-5ee6-832e-dd0a34a7687a', 'data_vg': 'ceph-20419d67-2a88-5ee6-832e-dd0a34a7687a'})
2025-09-10 00:42:50.836680 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-28e77ae9-929e-5c68-8a2a-91f3bea00aca', 'data_vg': 'ceph-28e77ae9-929e-5c68-8a2a-91f3bea00aca'})
2025-09-10 00:42:50.836692 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:50.836705 | orchestrator |
2025-09-10 00:42:50.836717 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-09-10 00:42:50.836729 | orchestrator | Wednesday 10 September 2025 00:42:46 +0000 (0:00:00.154) 0:00:41.008 ***
2025-09-10 00:42:50.836741 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:42:50.836754 | orchestrator |
2025-09-10 00:42:50.836766 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-09-10 00:42:50.836778 | orchestrator | Wednesday 10 September 2025 00:42:46 +0000 (0:00:00.154) 0:00:41.163 ***
2025-09-10 00:42:50.836800 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-20419d67-2a88-5ee6-832e-dd0a34a7687a', 'data_vg': 'ceph-20419d67-2a88-5ee6-832e-dd0a34a7687a'})
2025-09-10 00:42:50.836813 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-28e77ae9-929e-5c68-8a2a-91f3bea00aca', 'data_vg': 'ceph-28e77ae9-929e-5c68-8a2a-91f3bea00aca'})
2025-09-10 00:42:50.836825 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:50.836837 | orchestrator |
2025-09-10 00:42:50.836850 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-09-10 00:42:50.836863 | orchestrator | Wednesday 10 September 2025 00:42:46 +0000 (0:00:00.166) 0:00:41.329 ***
2025-09-10 00:42:50.836874 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-20419d67-2a88-5ee6-832e-dd0a34a7687a', 'data_vg': 'ceph-20419d67-2a88-5ee6-832e-dd0a34a7687a'})
2025-09-10 00:42:50.836886 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-28e77ae9-929e-5c68-8a2a-91f3bea00aca', 'data_vg': 'ceph-28e77ae9-929e-5c68-8a2a-91f3bea00aca'})
2025-09-10 00:42:50.836899 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:50.836911 | orchestrator |
2025-09-10 00:42:50.836924 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-09-10 00:42:50.836936 | orchestrator | Wednesday 10 September 2025 00:42:46 +0000 (0:00:00.151) 0:00:41.481 ***
2025-09-10 00:42:50.836966 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-20419d67-2a88-5ee6-832e-dd0a34a7687a', 'data_vg': 'ceph-20419d67-2a88-5ee6-832e-dd0a34a7687a'})
2025-09-10 00:42:50.836980 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-28e77ae9-929e-5c68-8a2a-91f3bea00aca', 'data_vg': 'ceph-28e77ae9-929e-5c68-8a2a-91f3bea00aca'})
2025-09-10 00:42:50.836992 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:50.837004 | orchestrator |
2025-09-10 00:42:50.837015 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-09-10 00:42:50.837026 | orchestrator | Wednesday 10 September 2025 00:42:46 +0000 (0:00:00.170) 0:00:41.651 ***
2025-09-10 00:42:50.837037 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:50.837047 | orchestrator |
2025-09-10 00:42:50.837058 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-09-10 00:42:50.837068 | orchestrator | Wednesday 10 September 2025 00:42:47 +0000 (0:00:00.145) 0:00:41.796 ***
2025-09-10 00:42:50.837079 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:50.837089 | orchestrator |
2025-09-10 00:42:50.837100 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-09-10 00:42:50.837110 | orchestrator | Wednesday 10 September 2025 00:42:47 +0000 (0:00:00.137) 0:00:41.933 ***
2025-09-10 00:42:50.837121 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:50.837131 | orchestrator |
2025-09-10 00:42:50.837142 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-09-10 00:42:50.837152 | orchestrator | Wednesday 10 September 2025 00:42:47 +0000 (0:00:00.161) 0:00:42.095 ***
2025-09-10 00:42:50.837163 | orchestrator | ok: [testbed-node-4] => {
2025-09-10 00:42:50.837174 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2025-09-10 00:42:50.837185 | orchestrator | }
2025-09-10 00:42:50.837195 | orchestrator |
2025-09-10 00:42:50.837206 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-09-10 00:42:50.837217 | orchestrator | Wednesday 10 September 2025 00:42:47 +0000 (0:00:00.168) 0:00:42.263 ***
2025-09-10 00:42:50.837227 | orchestrator | ok: [testbed-node-4] => {
2025-09-10 00:42:50.837238 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2025-09-10 00:42:50.837248 | orchestrator | }
2025-09-10 00:42:50.837259 | orchestrator |
2025-09-10 00:42:50.837269 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-09-10 00:42:50.837280 | orchestrator | Wednesday 10 September 2025 00:42:47 +0000 (0:00:00.156) 0:00:42.419 ***
2025-09-10 00:42:50.837291 | orchestrator | ok: [testbed-node-4] => {
2025-09-10 00:42:50.837301 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2025-09-10 00:42:50.837319 | orchestrator | }
2025-09-10 00:42:50.837330 | orchestrator |
2025-09-10 00:42:50.837341 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-09-10 00:42:50.837351 | orchestrator | Wednesday 10 September 2025 00:42:47 +0000 (0:00:00.129) 0:00:42.549 ***
2025-09-10 00:42:50.837362 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:42:50.837372 | orchestrator |
2025-09-10 00:42:50.837383 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-09-10 00:42:50.837393 | orchestrator | Wednesday 10 September 2025 00:42:48 +0000 (0:00:00.729) 0:00:43.278 ***
2025-09-10 00:42:50.837410 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:42:50.837421 | orchestrator |
2025-09-10 00:42:50.837431 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-09-10 00:42:50.837442 | orchestrator | Wednesday 10 September 2025 00:42:49 +0000 (0:00:00.566) 0:00:43.845 ***
2025-09-10 00:42:50.837453 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:42:50.837463 | orchestrator |
2025-09-10 00:42:50.837474 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-09-10 00:42:50.837484 | orchestrator | Wednesday 10 September 2025 00:42:49 +0000 (0:00:00.543) 0:00:44.389 ***
2025-09-10 00:42:50.837495 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:42:50.837506 | orchestrator |
2025-09-10 00:42:50.837516 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-09-10 00:42:50.837527 | orchestrator | Wednesday 10 September 2025 00:42:49 +0000 (0:00:00.148) 0:00:44.538 ***
2025-09-10 00:42:50.837537 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:50.837548 | orchestrator |
2025-09-10 00:42:50.837558 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-09-10 00:42:50.837569 | orchestrator | Wednesday 10 September 2025 00:42:49 +0000 (0:00:00.127) 0:00:44.665 ***
2025-09-10 00:42:50.837580 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:50.837607 | orchestrator |
2025-09-10 00:42:50.837618 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-09-10 00:42:50.837629 | orchestrator | Wednesday 10 September 2025 00:42:50 +0000 (0:00:00.102) 0:00:44.767 ***
2025-09-10 00:42:50.837639 | orchestrator | ok: [testbed-node-4] => {
2025-09-10 00:42:50.837650 | orchestrator |     "vgs_report": {
2025-09-10 00:42:50.837661 | orchestrator |         "vg": []
2025-09-10 00:42:50.837672 | orchestrator |     }
2025-09-10 00:42:50.837683 | orchestrator | }
2025-09-10 00:42:50.837694 | orchestrator |
2025-09-10 00:42:50.837705 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-09-10 00:42:50.837715 | orchestrator | Wednesday 10 September 2025 00:42:50 +0000 (0:00:00.143) 0:00:44.911 ***
2025-09-10 00:42:50.837727 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:50.837737 | orchestrator |
2025-09-10 00:42:50.837748 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-09-10 00:42:50.837759 | orchestrator | Wednesday 10 September 2025 00:42:50 +0000 (0:00:00.150) 0:00:45.062 ***
2025-09-10 00:42:50.837769 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:50.837780 | orchestrator |
2025-09-10 00:42:50.837791 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-09-10 00:42:50.837802 | orchestrator | Wednesday 10 September 2025 00:42:50 +0000 (0:00:00.139) 0:00:45.201 ***
2025-09-10 00:42:50.837812 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:50.837823 | orchestrator |
2025-09-10 00:42:50.837834 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-09-10 00:42:50.837845 | orchestrator | Wednesday 10 September 2025 00:42:50 +0000 (0:00:00.149) 0:00:45.351 ***
2025-09-10 00:42:50.837855 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:50.837866 | orchestrator |
2025-09-10 00:42:50.837878 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-09-10 00:42:50.837894 | orchestrator | Wednesday 10 September 2025 00:42:50 +0000 (0:00:00.141) 0:00:45.492 ***
2025-09-10 00:42:55.809247 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:55.809336 | orchestrator |
2025-09-10 00:42:55.809371 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-09-10 00:42:55.809384 | orchestrator | Wednesday 10 September 2025 00:42:50 +0000 (0:00:00.163) 0:00:45.656 ***
2025-09-10 00:42:55.809395 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:55.809405 | orchestrator |
2025-09-10 00:42:55.809416 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-09-10 00:42:55.809427 | orchestrator | Wednesday 10 September 2025 00:42:51 +0000 (0:00:00.383) 0:00:46.040 ***
2025-09-10 00:42:55.809438 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:55.809448 | orchestrator |
2025-09-10 00:42:55.809459 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-09-10 00:42:55.809470 | orchestrator | Wednesday 10 September 2025 00:42:51 +0000 (0:00:00.151) 0:00:46.191 ***
2025-09-10 00:42:55.809480 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:55.809491 | orchestrator |
2025-09-10 00:42:55.809501 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-09-10 00:42:55.809511 | orchestrator | Wednesday 10 September 2025 00:42:51 +0000 (0:00:00.122) 0:00:46.313 ***
2025-09-10 00:42:55.809522 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:55.809532 | orchestrator |
2025-09-10 00:42:55.809543 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-09-10 00:42:55.809553 | orchestrator | Wednesday 10 September 2025 00:42:51 +0000 (0:00:00.142) 0:00:46.456 ***
2025-09-10 00:42:55.809564 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:55.809574 | orchestrator |
2025-09-10 00:42:55.809585 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-09-10 00:42:55.809635 | orchestrator | Wednesday 10 September 2025 00:42:51 +0000 (0:00:00.144) 0:00:46.601 ***
2025-09-10 00:42:55.809647 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:55.809657 | orchestrator |
2025-09-10 00:42:55.809668 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-09-10 00:42:55.809678 | orchestrator | Wednesday 10 September 2025 00:42:52 +0000 (0:00:00.174) 0:00:46.775 ***
2025-09-10 00:42:55.809689 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:55.809700 | orchestrator |
2025-09-10 00:42:55.809710 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-09-10 00:42:55.809721 | orchestrator | Wednesday 10 September 2025 00:42:52 +0000 (0:00:00.132) 0:00:46.907 ***
2025-09-10 00:42:55.809732 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:55.809743 | orchestrator |
2025-09-10 00:42:55.809753 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-09-10 00:42:55.809764 | orchestrator | Wednesday 10 September 2025 00:42:52 +0000 (0:00:00.139) 0:00:47.047 ***
2025-09-10 00:42:55.809774 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:55.809785 | orchestrator |
2025-09-10 00:42:55.809798 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-09-10 00:42:55.809810 | orchestrator | Wednesday 10 September 2025 00:42:52 +0000 (0:00:00.183) 0:00:47.231 ***
2025-09-10 00:42:55.809838 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-20419d67-2a88-5ee6-832e-dd0a34a7687a', 'data_vg': 'ceph-20419d67-2a88-5ee6-832e-dd0a34a7687a'})
2025-09-10 00:42:55.809852 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-28e77ae9-929e-5c68-8a2a-91f3bea00aca', 'data_vg': 'ceph-28e77ae9-929e-5c68-8a2a-91f3bea00aca'})
2025-09-10 00:42:55.809865 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:55.809878 | orchestrator |
2025-09-10 00:42:55.809891 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-09-10 00:42:55.809904 | orchestrator | Wednesday 10 September 2025 00:42:52 +0000 (0:00:00.172) 0:00:47.404 ***
2025-09-10 00:42:55.809917 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-20419d67-2a88-5ee6-832e-dd0a34a7687a', 'data_vg': 'ceph-20419d67-2a88-5ee6-832e-dd0a34a7687a'})
2025-09-10 00:42:55.809930 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-28e77ae9-929e-5c68-8a2a-91f3bea00aca', 'data_vg': 'ceph-28e77ae9-929e-5c68-8a2a-91f3bea00aca'})
2025-09-10 00:42:55.809951 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:55.809964 | orchestrator |
2025-09-10 00:42:55.809976 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-09-10 00:42:55.809988 | orchestrator | Wednesday 10 September 2025 00:42:52 +0000 (0:00:00.148) 0:00:47.552 ***
2025-09-10 00:42:55.810001 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-20419d67-2a88-5ee6-832e-dd0a34a7687a', 'data_vg': 'ceph-20419d67-2a88-5ee6-832e-dd0a34a7687a'})
2025-09-10 00:42:55.810013 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-28e77ae9-929e-5c68-8a2a-91f3bea00aca', 'data_vg': 'ceph-28e77ae9-929e-5c68-8a2a-91f3bea00aca'})
2025-09-10 00:42:55.810077 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:55.810089 | orchestrator |
2025-09-10 00:42:55.810102 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-09-10 00:42:55.810115 | orchestrator | Wednesday 10 September 2025 00:42:53 +0000 (0:00:00.156) 0:00:47.708 ***
2025-09-10 00:42:55.810127 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-20419d67-2a88-5ee6-832e-dd0a34a7687a', 'data_vg': 'ceph-20419d67-2a88-5ee6-832e-dd0a34a7687a'})
2025-09-10 00:42:55.810139 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-28e77ae9-929e-5c68-8a2a-91f3bea00aca', 'data_vg': 'ceph-28e77ae9-929e-5c68-8a2a-91f3bea00aca'})
2025-09-10 00:42:55.810151 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:55.810162 | orchestrator |
2025-09-10 00:42:55.810173 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-09-10 00:42:55.810201 | orchestrator | Wednesday 10 September 2025 00:42:53 +0000 (0:00:00.386) 0:00:48.095 ***
2025-09-10 00:42:55.810212 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-20419d67-2a88-5ee6-832e-dd0a34a7687a', 'data_vg': 'ceph-20419d67-2a88-5ee6-832e-dd0a34a7687a'})
2025-09-10 00:42:55.810223 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-28e77ae9-929e-5c68-8a2a-91f3bea00aca', 'data_vg': 'ceph-28e77ae9-929e-5c68-8a2a-91f3bea00aca'})
2025-09-10 00:42:55.810234 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:55.810245 | orchestrator |
2025-09-10 00:42:55.810256 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-09-10 00:42:55.810266 | orchestrator | Wednesday 10 September 2025 00:42:53 +0000 (0:00:00.162) 0:00:48.257 ***
2025-09-10 00:42:55.810277 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-20419d67-2a88-5ee6-832e-dd0a34a7687a', 'data_vg': 'ceph-20419d67-2a88-5ee6-832e-dd0a34a7687a'})
2025-09-10 00:42:55.810288 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-28e77ae9-929e-5c68-8a2a-91f3bea00aca', 'data_vg': 'ceph-28e77ae9-929e-5c68-8a2a-91f3bea00aca'})
2025-09-10 00:42:55.810298 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:55.810309 | orchestrator |
2025-09-10 00:42:55.810320 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-09-10 00:42:55.810331 | orchestrator | Wednesday 10 September 2025 00:42:53 +0000 (0:00:00.168) 0:00:48.425 ***
2025-09-10 00:42:55.810341 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-20419d67-2a88-5ee6-832e-dd0a34a7687a', 'data_vg': 'ceph-20419d67-2a88-5ee6-832e-dd0a34a7687a'})
2025-09-10 00:42:55.810352 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-28e77ae9-929e-5c68-8a2a-91f3bea00aca', 'data_vg': 'ceph-28e77ae9-929e-5c68-8a2a-91f3bea00aca'})
2025-09-10 00:42:55.810362 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:55.810373 | orchestrator |
2025-09-10 00:42:55.810383 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-09-10 00:42:55.810394 | orchestrator | Wednesday 10 September 2025 00:42:53 +0000 (0:00:00.153) 0:00:48.578 ***
2025-09-10 00:42:55.810404 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-20419d67-2a88-5ee6-832e-dd0a34a7687a', 'data_vg': 'ceph-20419d67-2a88-5ee6-832e-dd0a34a7687a'})
2025-09-10 00:42:55.810423 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-28e77ae9-929e-5c68-8a2a-91f3bea00aca', 'data_vg': 'ceph-28e77ae9-929e-5c68-8a2a-91f3bea00aca'})
2025-09-10 00:42:55.810433 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:55.810444 | orchestrator |
2025-09-10 00:42:55.810455 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-09-10 00:42:55.810499 | orchestrator | Wednesday 10 September 2025 00:42:54 +0000 (0:00:00.146) 0:00:48.725 ***
2025-09-10 00:42:55.810511 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:42:55.810522 | orchestrator |
2025-09-10 00:42:55.810532 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-09-10 00:42:55.810544 | orchestrator | Wednesday 10 September 2025 00:42:54 +0000 (0:00:00.550) 0:00:49.276 ***
2025-09-10 00:42:55.810554 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:42:55.810565 | orchestrator |
2025-09-10 00:42:55.810576 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-09-10 00:42:55.810587 | orchestrator | Wednesday 10 September 2025 00:42:55 +0000 (0:00:00.498) 0:00:49.774 ***
2025-09-10 00:42:55.810615 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:42:55.810626 | orchestrator |
2025-09-10 00:42:55.810637 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-09-10 00:42:55.810648 | orchestrator | Wednesday 10 September 2025 00:42:55 +0000 (0:00:00.152) 0:00:49.927 ***
2025-09-10 00:42:55.810658 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-20419d67-2a88-5ee6-832e-dd0a34a7687a', 'vg_name': 'ceph-20419d67-2a88-5ee6-832e-dd0a34a7687a'})
2025-09-10 00:42:55.810670 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-28e77ae9-929e-5c68-8a2a-91f3bea00aca', 'vg_name': 'ceph-28e77ae9-929e-5c68-8a2a-91f3bea00aca'})
2025-09-10 00:42:55.810680 | orchestrator |
2025-09-10 00:42:55.810691 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-09-10 00:42:55.810702 | orchestrator | Wednesday 10 September 2025 00:42:55 +0000 (0:00:00.201) 0:00:50.128 ***
2025-09-10 00:42:55.810712 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-20419d67-2a88-5ee6-832e-dd0a34a7687a', 'data_vg': 'ceph-20419d67-2a88-5ee6-832e-dd0a34a7687a'})
2025-09-10 00:42:55.810723 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-28e77ae9-929e-5c68-8a2a-91f3bea00aca', 'data_vg': 'ceph-28e77ae9-929e-5c68-8a2a-91f3bea00aca'})
2025-09-10 00:42:55.810734 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:42:55.810744 | orchestrator |
2025-09-10 00:42:55.810755 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-09-10 00:42:55.810765 | orchestrator | Wednesday 10 September 2025 00:42:55 +0000 (0:00:00.179) 0:00:50.307 ***
2025-09-10 00:42:55.810776 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-20419d67-2a88-5ee6-832e-dd0a34a7687a', 'data_vg': 'ceph-20419d67-2a88-5ee6-832e-dd0a34a7687a'})
2025-09-10 00:42:55.810786 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-28e77ae9-929e-5c68-8a2a-91f3bea00aca', 'data_vg': 'ceph-28e77ae9-929e-5c68-8a2a-91f3bea00aca'})
2025-09-10 00:42:55.810804 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:43:02.011917 | orchestrator |
2025-09-10 00:43:02.012036 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-09-10 00:43:02.012053 | orchestrator | Wednesday 10 September 2025 00:42:55 +0000 (0:00:00.159) 0:00:50.467 ***
2025-09-10 00:43:02.012067 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-20419d67-2a88-5ee6-832e-dd0a34a7687a', 'data_vg': 'ceph-20419d67-2a88-5ee6-832e-dd0a34a7687a'})
2025-09-10 00:43:02.012081 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-28e77ae9-929e-5c68-8a2a-91f3bea00aca', 'data_vg': 'ceph-28e77ae9-929e-5c68-8a2a-91f3bea00aca'})
2025-09-10 00:43:02.012092 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:43:02.012104 | orchestrator |
2025-09-10 00:43:02.012115 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-09-10 00:43:02.012126 | orchestrator | Wednesday 10 September 2025 00:42:55 +0000 (0:00:00.174) 0:00:50.642 ***
2025-09-10 00:43:02.012158 | orchestrator | ok: [testbed-node-4] => {
2025-09-10 00:43:02.012170 | orchestrator |     "lvm_report": {
2025-09-10 00:43:02.012182 | orchestrator |         "lv": [
2025-09-10 00:43:02.012194 | orchestrator |             {
2025-09-10 00:43:02.012205 | orchestrator |                 "lv_name": "osd-block-20419d67-2a88-5ee6-832e-dd0a34a7687a",
2025-09-10 00:43:02.012217 | orchestrator |                 "vg_name": "ceph-20419d67-2a88-5ee6-832e-dd0a34a7687a"
2025-09-10 00:43:02.012228 | orchestrator |             },
2025-09-10 00:43:02.012239 | orchestrator |             {
2025-09-10 00:43:02.012250 | orchestrator |                 "lv_name": "osd-block-28e77ae9-929e-5c68-8a2a-91f3bea00aca",
2025-09-10 00:43:02.012261 | orchestrator |                 "vg_name": "ceph-28e77ae9-929e-5c68-8a2a-91f3bea00aca"
2025-09-10 00:43:02.012271 | orchestrator |             }
2025-09-10 00:43:02.012282 | orchestrator |         ],
2025-09-10 00:43:02.012293 | orchestrator |         "pv": [
2025-09-10 00:43:02.012304 | orchestrator |             {
2025-09-10 00:43:02.012315 | orchestrator |                 "pv_name": "/dev/sdb",
2025-09-10 00:43:02.012326 | orchestrator |                 "vg_name": "ceph-20419d67-2a88-5ee6-832e-dd0a34a7687a"
2025-09-10 00:43:02.012337 | orchestrator |             },
2025-09-10 00:43:02.012347 | orchestrator |             {
2025-09-10 00:43:02.012358 | orchestrator |                 "pv_name": "/dev/sdc",
2025-09-10 00:43:02.012369 | orchestrator |                 "vg_name":
"ceph-28e77ae9-929e-5c68-8a2a-91f3bea00aca" 2025-09-10 00:43:02.012380 | orchestrator |  } 2025-09-10 00:43:02.012391 | orchestrator |  ] 2025-09-10 00:43:02.012402 | orchestrator |  } 2025-09-10 00:43:02.012412 | orchestrator | } 2025-09-10 00:43:02.012423 | orchestrator | 2025-09-10 00:43:02.012434 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-10 00:43:02.012447 | orchestrator | 2025-09-10 00:43:02.012460 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-10 00:43:02.012473 | orchestrator | Wednesday 10 September 2025 00:42:56 +0000 (0:00:00.482) 0:00:51.124 *** 2025-09-10 00:43:02.012486 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-10 00:43:02.012500 | orchestrator | 2025-09-10 00:43:02.012529 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-10 00:43:02.012542 | orchestrator | Wednesday 10 September 2025 00:42:56 +0000 (0:00:00.268) 0:00:51.392 *** 2025-09-10 00:43:02.012555 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:43:02.012568 | orchestrator | 2025-09-10 00:43:02.012581 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 00:43:02.012626 | orchestrator | Wednesday 10 September 2025 00:42:57 +0000 (0:00:00.307) 0:00:51.699 *** 2025-09-10 00:43:02.012640 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-09-10 00:43:02.012653 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-09-10 00:43:02.012666 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-09-10 00:43:02.012679 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-09-10 00:43:02.012691 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-09-10 00:43:02.012705 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-09-10 00:43:02.012718 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-09-10 00:43:02.012731 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-09-10 00:43:02.012744 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-09-10 00:43:02.012757 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-09-10 00:43:02.012769 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-09-10 00:43:02.012792 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-09-10 00:43:02.012804 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-09-10 00:43:02.012815 | orchestrator | 2025-09-10 00:43:02.012825 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 00:43:02.012836 | orchestrator | Wednesday 10 September 2025 00:42:57 +0000 (0:00:00.434) 0:00:52.134 *** 2025-09-10 00:43:02.012847 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:43:02.012863 | orchestrator | 2025-09-10 00:43:02.012874 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 00:43:02.012885 | orchestrator | Wednesday 10 September 2025 00:42:57 +0000 (0:00:00.192) 0:00:52.327 *** 2025-09-10 00:43:02.012896 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:43:02.012906 | orchestrator | 2025-09-10 00:43:02.012917 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 00:43:02.012947 | orchestrator | 
Wednesday 10 September 2025 00:42:57 +0000 (0:00:00.199) 0:00:52.527 *** 2025-09-10 00:43:02.012959 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:43:02.012969 | orchestrator | 2025-09-10 00:43:02.012980 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 00:43:02.012991 | orchestrator | Wednesday 10 September 2025 00:42:58 +0000 (0:00:00.208) 0:00:52.735 *** 2025-09-10 00:43:02.013001 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:43:02.013012 | orchestrator | 2025-09-10 00:43:02.013023 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 00:43:02.013034 | orchestrator | Wednesday 10 September 2025 00:42:58 +0000 (0:00:00.198) 0:00:52.934 *** 2025-09-10 00:43:02.013045 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:43:02.013055 | orchestrator | 2025-09-10 00:43:02.013066 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 00:43:02.013077 | orchestrator | Wednesday 10 September 2025 00:42:58 +0000 (0:00:00.216) 0:00:53.150 *** 2025-09-10 00:43:02.013088 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:43:02.013098 | orchestrator | 2025-09-10 00:43:02.013109 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 00:43:02.013120 | orchestrator | Wednesday 10 September 2025 00:42:59 +0000 (0:00:00.582) 0:00:53.733 *** 2025-09-10 00:43:02.013130 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:43:02.013141 | orchestrator | 2025-09-10 00:43:02.013152 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 00:43:02.013163 | orchestrator | Wednesday 10 September 2025 00:42:59 +0000 (0:00:00.207) 0:00:53.940 *** 2025-09-10 00:43:02.013173 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:43:02.013184 | orchestrator | 2025-09-10 00:43:02.013194 
| orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 00:43:02.013205 | orchestrator | Wednesday 10 September 2025 00:42:59 +0000 (0:00:00.193) 0:00:54.134 *** 2025-09-10 00:43:02.013216 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_76e3aa19-82bc-491d-ab0d-cf92a10f04de) 2025-09-10 00:43:02.013228 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_76e3aa19-82bc-491d-ab0d-cf92a10f04de) 2025-09-10 00:43:02.013239 | orchestrator | 2025-09-10 00:43:02.013249 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 00:43:02.013260 | orchestrator | Wednesday 10 September 2025 00:42:59 +0000 (0:00:00.445) 0:00:54.580 *** 2025-09-10 00:43:02.013271 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2ea24e78-5d32-46b6-abea-13531085710c) 2025-09-10 00:43:02.013281 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2ea24e78-5d32-46b6-abea-13531085710c) 2025-09-10 00:43:02.013292 | orchestrator | 2025-09-10 00:43:02.013302 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 00:43:02.013313 | orchestrator | Wednesday 10 September 2025 00:43:00 +0000 (0:00:00.437) 0:00:55.017 *** 2025-09-10 00:43:02.013336 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b86cce0a-40ed-4a07-99f1-19becb84c901) 2025-09-10 00:43:02.013347 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b86cce0a-40ed-4a07-99f1-19becb84c901) 2025-09-10 00:43:02.013358 | orchestrator | 2025-09-10 00:43:02.013369 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 00:43:02.013380 | orchestrator | Wednesday 10 September 2025 00:43:00 +0000 (0:00:00.453) 0:00:55.471 *** 2025-09-10 00:43:02.013390 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_d735b4b4-15bb-46a1-a658-203c9ec5fb9c) 2025-09-10 00:43:02.013401 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d735b4b4-15bb-46a1-a658-203c9ec5fb9c) 2025-09-10 00:43:02.013412 | orchestrator | 2025-09-10 00:43:02.013422 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-10 00:43:02.013433 | orchestrator | Wednesday 10 September 2025 00:43:01 +0000 (0:00:00.433) 0:00:55.904 *** 2025-09-10 00:43:02.013443 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-10 00:43:02.013454 | orchestrator | 2025-09-10 00:43:02.013465 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:43:02.013476 | orchestrator | Wednesday 10 September 2025 00:43:01 +0000 (0:00:00.318) 0:00:56.222 *** 2025-09-10 00:43:02.013486 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-09-10 00:43:02.013497 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-09-10 00:43:02.013507 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-09-10 00:43:02.013518 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-09-10 00:43:02.013528 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-09-10 00:43:02.013539 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-09-10 00:43:02.013550 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-09-10 00:43:02.013560 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-09-10 00:43:02.013571 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-09-10 00:43:02.013581 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-09-10 00:43:02.013608 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-09-10 00:43:02.013627 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-09-10 00:43:11.242696 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-09-10 00:43:11.242827 | orchestrator | 2025-09-10 00:43:11.242853 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:43:11.242874 | orchestrator | Wednesday 10 September 2025 00:43:01 +0000 (0:00:00.440) 0:00:56.663 *** 2025-09-10 00:43:11.242892 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:43:11.242911 | orchestrator | 2025-09-10 00:43:11.242929 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:43:11.242947 | orchestrator | Wednesday 10 September 2025 00:43:02 +0000 (0:00:00.189) 0:00:56.852 *** 2025-09-10 00:43:11.242963 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:43:11.242980 | orchestrator | 2025-09-10 00:43:11.242997 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:43:11.243017 | orchestrator | Wednesday 10 September 2025 00:43:02 +0000 (0:00:00.197) 0:00:57.050 *** 2025-09-10 00:43:11.243036 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:43:11.243054 | orchestrator | 2025-09-10 00:43:11.243073 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:43:11.243116 | orchestrator | Wednesday 10 September 2025 00:43:03 +0000 (0:00:00.643) 0:00:57.693 *** 2025-09-10 00:43:11.243133 | orchestrator | 
skipping: [testbed-node-5] 2025-09-10 00:43:11.243150 | orchestrator | 2025-09-10 00:43:11.243167 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:43:11.243187 | orchestrator | Wednesday 10 September 2025 00:43:03 +0000 (0:00:00.209) 0:00:57.903 *** 2025-09-10 00:43:11.243204 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:43:11.243224 | orchestrator | 2025-09-10 00:43:11.243242 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:43:11.243262 | orchestrator | Wednesday 10 September 2025 00:43:03 +0000 (0:00:00.201) 0:00:58.104 *** 2025-09-10 00:43:11.243284 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:43:11.243305 | orchestrator | 2025-09-10 00:43:11.243326 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:43:11.243346 | orchestrator | Wednesday 10 September 2025 00:43:03 +0000 (0:00:00.244) 0:00:58.349 *** 2025-09-10 00:43:11.243358 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:43:11.243370 | orchestrator | 2025-09-10 00:43:11.243383 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:43:11.243395 | orchestrator | Wednesday 10 September 2025 00:43:03 +0000 (0:00:00.260) 0:00:58.609 *** 2025-09-10 00:43:11.243407 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:43:11.243419 | orchestrator | 2025-09-10 00:43:11.243431 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:43:11.243443 | orchestrator | Wednesday 10 September 2025 00:43:04 +0000 (0:00:00.236) 0:00:58.845 *** 2025-09-10 00:43:11.243455 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-09-10 00:43:11.243468 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-09-10 00:43:11.243481 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-09-10 
00:43:11.243493 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-09-10 00:43:11.243506 | orchestrator | 2025-09-10 00:43:11.243519 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:43:11.243530 | orchestrator | Wednesday 10 September 2025 00:43:04 +0000 (0:00:00.684) 0:00:59.530 *** 2025-09-10 00:43:11.243541 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:43:11.243551 | orchestrator | 2025-09-10 00:43:11.243561 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:43:11.243572 | orchestrator | Wednesday 10 September 2025 00:43:05 +0000 (0:00:00.206) 0:00:59.736 *** 2025-09-10 00:43:11.243582 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:43:11.243629 | orchestrator | 2025-09-10 00:43:11.243649 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:43:11.243666 | orchestrator | Wednesday 10 September 2025 00:43:05 +0000 (0:00:00.198) 0:00:59.934 *** 2025-09-10 00:43:11.243684 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:43:11.243700 | orchestrator | 2025-09-10 00:43:11.243716 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-10 00:43:11.243732 | orchestrator | Wednesday 10 September 2025 00:43:05 +0000 (0:00:00.246) 0:01:00.181 *** 2025-09-10 00:43:11.243747 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:43:11.243761 | orchestrator | 2025-09-10 00:43:11.243777 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-10 00:43:11.243794 | orchestrator | Wednesday 10 September 2025 00:43:05 +0000 (0:00:00.235) 0:01:00.417 *** 2025-09-10 00:43:11.243810 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:43:11.243827 | orchestrator | 2025-09-10 00:43:11.243842 | orchestrator | TASK [Create dict of block VGs -> PVs from 
ceph_osd_devices] ******************* 2025-09-10 00:43:11.243858 | orchestrator | Wednesday 10 September 2025 00:43:06 +0000 (0:00:00.387) 0:01:00.804 *** 2025-09-10 00:43:11.243874 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '36dac960-67a7-54a4-bbd2-b6f8976b18f7'}}) 2025-09-10 00:43:11.243890 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f4115e81-926e-57fb-8145-65084efa4466'}}) 2025-09-10 00:43:11.243922 | orchestrator | 2025-09-10 00:43:11.243938 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-09-10 00:43:11.243953 | orchestrator | Wednesday 10 September 2025 00:43:06 +0000 (0:00:00.238) 0:01:01.043 *** 2025-09-10 00:43:11.243971 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-36dac960-67a7-54a4-bbd2-b6f8976b18f7', 'data_vg': 'ceph-36dac960-67a7-54a4-bbd2-b6f8976b18f7'}) 2025-09-10 00:43:11.243989 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f4115e81-926e-57fb-8145-65084efa4466', 'data_vg': 'ceph-f4115e81-926e-57fb-8145-65084efa4466'}) 2025-09-10 00:43:11.244003 | orchestrator | 2025-09-10 00:43:11.244019 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-09-10 00:43:11.244062 | orchestrator | Wednesday 10 September 2025 00:43:08 +0000 (0:00:01.831) 0:01:02.875 *** 2025-09-10 00:43:11.244080 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-36dac960-67a7-54a4-bbd2-b6f8976b18f7', 'data_vg': 'ceph-36dac960-67a7-54a4-bbd2-b6f8976b18f7'})  2025-09-10 00:43:11.244096 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f4115e81-926e-57fb-8145-65084efa4466', 'data_vg': 'ceph-f4115e81-926e-57fb-8145-65084efa4466'})  2025-09-10 00:43:11.244111 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:43:11.244127 | orchestrator | 2025-09-10 00:43:11.244143 | orchestrator | TASK [Create 
block LVs] ******************************************************** 2025-09-10 00:43:11.244159 | orchestrator | Wednesday 10 September 2025 00:43:08 +0000 (0:00:00.154) 0:01:03.029 *** 2025-09-10 00:43:11.244174 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-36dac960-67a7-54a4-bbd2-b6f8976b18f7', 'data_vg': 'ceph-36dac960-67a7-54a4-bbd2-b6f8976b18f7'}) 2025-09-10 00:43:11.244211 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f4115e81-926e-57fb-8145-65084efa4466', 'data_vg': 'ceph-f4115e81-926e-57fb-8145-65084efa4466'}) 2025-09-10 00:43:11.244229 | orchestrator | 2025-09-10 00:43:11.244246 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-10 00:43:11.244261 | orchestrator | Wednesday 10 September 2025 00:43:09 +0000 (0:00:01.275) 0:01:04.305 *** 2025-09-10 00:43:11.244277 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-36dac960-67a7-54a4-bbd2-b6f8976b18f7', 'data_vg': 'ceph-36dac960-67a7-54a4-bbd2-b6f8976b18f7'})  2025-09-10 00:43:11.244293 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f4115e81-926e-57fb-8145-65084efa4466', 'data_vg': 'ceph-f4115e81-926e-57fb-8145-65084efa4466'})  2025-09-10 00:43:11.244310 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:43:11.244327 | orchestrator | 2025-09-10 00:43:11.244343 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-10 00:43:11.244360 | orchestrator | Wednesday 10 September 2025 00:43:09 +0000 (0:00:00.166) 0:01:04.471 *** 2025-09-10 00:43:11.244379 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:43:11.244396 | orchestrator | 2025-09-10 00:43:11.244415 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-10 00:43:11.244432 | orchestrator | Wednesday 10 September 2025 00:43:09 +0000 (0:00:00.149) 0:01:04.621 *** 2025-09-10 00:43:11.244446 | 
orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-36dac960-67a7-54a4-bbd2-b6f8976b18f7', 'data_vg': 'ceph-36dac960-67a7-54a4-bbd2-b6f8976b18f7'})  2025-09-10 00:43:11.244474 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f4115e81-926e-57fb-8145-65084efa4466', 'data_vg': 'ceph-f4115e81-926e-57fb-8145-65084efa4466'})  2025-09-10 00:43:11.244493 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:43:11.244511 | orchestrator | 2025-09-10 00:43:11.244530 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-09-10 00:43:11.244549 | orchestrator | Wednesday 10 September 2025 00:43:10 +0000 (0:00:00.158) 0:01:04.780 *** 2025-09-10 00:43:11.244569 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:43:11.244631 | orchestrator | 2025-09-10 00:43:11.244644 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-09-10 00:43:11.244654 | orchestrator | Wednesday 10 September 2025 00:43:10 +0000 (0:00:00.146) 0:01:04.927 *** 2025-09-10 00:43:11.244665 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-36dac960-67a7-54a4-bbd2-b6f8976b18f7', 'data_vg': 'ceph-36dac960-67a7-54a4-bbd2-b6f8976b18f7'})  2025-09-10 00:43:11.244676 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f4115e81-926e-57fb-8145-65084efa4466', 'data_vg': 'ceph-f4115e81-926e-57fb-8145-65084efa4466'})  2025-09-10 00:43:11.244686 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:43:11.244697 | orchestrator | 2025-09-10 00:43:11.244707 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-09-10 00:43:11.244718 | orchestrator | Wednesday 10 September 2025 00:43:10 +0000 (0:00:00.161) 0:01:05.089 *** 2025-09-10 00:43:11.244728 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:43:11.244739 | orchestrator | 2025-09-10 00:43:11.244749 | orchestrator | TASK 
[Print 'Create DB+WAL VGs'] *********************************************** 2025-09-10 00:43:11.244760 | orchestrator | Wednesday 10 September 2025 00:43:10 +0000 (0:00:00.137) 0:01:05.226 *** 2025-09-10 00:43:11.244770 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-36dac960-67a7-54a4-bbd2-b6f8976b18f7', 'data_vg': 'ceph-36dac960-67a7-54a4-bbd2-b6f8976b18f7'})  2025-09-10 00:43:11.244781 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f4115e81-926e-57fb-8145-65084efa4466', 'data_vg': 'ceph-f4115e81-926e-57fb-8145-65084efa4466'})  2025-09-10 00:43:11.244792 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:43:11.244802 | orchestrator | 2025-09-10 00:43:11.244813 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-10 00:43:11.244823 | orchestrator | Wednesday 10 September 2025 00:43:10 +0000 (0:00:00.161) 0:01:05.388 *** 2025-09-10 00:43:11.244834 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:43:11.244845 | orchestrator | 2025-09-10 00:43:11.244863 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-09-10 00:43:11.244882 | orchestrator | Wednesday 10 September 2025 00:43:11 +0000 (0:00:00.352) 0:01:05.741 *** 2025-09-10 00:43:11.244967 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-36dac960-67a7-54a4-bbd2-b6f8976b18f7', 'data_vg': 'ceph-36dac960-67a7-54a4-bbd2-b6f8976b18f7'})  2025-09-10 00:43:17.403535 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f4115e81-926e-57fb-8145-65084efa4466', 'data_vg': 'ceph-f4115e81-926e-57fb-8145-65084efa4466'})  2025-09-10 00:43:17.403689 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:43:17.403708 | orchestrator | 2025-09-10 00:43:17.403721 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-10 00:43:17.403734 | orchestrator | Wednesday 10 September 
2025 00:43:11 +0000 (0:00:00.161) 0:01:05.902 *** 2025-09-10 00:43:17.403746 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-36dac960-67a7-54a4-bbd2-b6f8976b18f7', 'data_vg': 'ceph-36dac960-67a7-54a4-bbd2-b6f8976b18f7'})  2025-09-10 00:43:17.403758 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f4115e81-926e-57fb-8145-65084efa4466', 'data_vg': 'ceph-f4115e81-926e-57fb-8145-65084efa4466'})  2025-09-10 00:43:17.403769 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:43:17.403780 | orchestrator | 2025-09-10 00:43:17.403791 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-10 00:43:17.403802 | orchestrator | Wednesday 10 September 2025 00:43:11 +0000 (0:00:00.155) 0:01:06.057 *** 2025-09-10 00:43:17.403813 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-36dac960-67a7-54a4-bbd2-b6f8976b18f7', 'data_vg': 'ceph-36dac960-67a7-54a4-bbd2-b6f8976b18f7'})  2025-09-10 00:43:17.403824 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f4115e81-926e-57fb-8145-65084efa4466', 'data_vg': 'ceph-f4115e81-926e-57fb-8145-65084efa4466'})  2025-09-10 00:43:17.403835 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:43:17.403866 | orchestrator | 2025-09-10 00:43:17.403878 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-10 00:43:17.403889 | orchestrator | Wednesday 10 September 2025 00:43:11 +0000 (0:00:00.184) 0:01:06.242 *** 2025-09-10 00:43:17.403900 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:43:17.403910 | orchestrator | 2025-09-10 00:43:17.403921 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-10 00:43:17.403932 | orchestrator | Wednesday 10 September 2025 00:43:11 +0000 (0:00:00.151) 0:01:06.394 *** 2025-09-10 00:43:17.403943 | orchestrator | skipping: [testbed-node-5] 2025-09-10 
00:43:17.403954 | orchestrator |
2025-09-10 00:43:17.403964 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-09-10 00:43:17.403975 | orchestrator | Wednesday 10 September 2025 00:43:11 +0000 (0:00:00.148) 0:01:06.542 ***
2025-09-10 00:43:17.403985 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:43:17.403996 | orchestrator |
2025-09-10 00:43:17.404007 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-09-10 00:43:17.404031 | orchestrator | Wednesday 10 September 2025 00:43:12 +0000 (0:00:00.140) 0:01:06.683 ***
2025-09-10 00:43:17.404042 | orchestrator | ok: [testbed-node-5] => {
2025-09-10 00:43:17.404054 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2025-09-10 00:43:17.404065 | orchestrator | }
2025-09-10 00:43:17.404076 | orchestrator |
2025-09-10 00:43:17.404087 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-09-10 00:43:17.404097 | orchestrator | Wednesday 10 September 2025 00:43:12 +0000 (0:00:00.137) 0:01:06.821 ***
2025-09-10 00:43:17.404108 | orchestrator | ok: [testbed-node-5] => {
2025-09-10 00:43:17.404119 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2025-09-10 00:43:17.404129 | orchestrator | }
2025-09-10 00:43:17.404140 | orchestrator |
2025-09-10 00:43:17.404151 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-09-10 00:43:17.404162 | orchestrator | Wednesday 10 September 2025 00:43:12 +0000 (0:00:00.129) 0:01:06.950 ***
2025-09-10 00:43:17.404173 | orchestrator | ok: [testbed-node-5] => {
2025-09-10 00:43:17.404184 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2025-09-10 00:43:17.404195 | orchestrator | }
2025-09-10 00:43:17.404206 | orchestrator |
2025-09-10 00:43:17.404217 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-09-10 00:43:17.404227 | orchestrator | Wednesday 10 September 2025 00:43:12 +0000 (0:00:00.140) 0:01:07.091 ***
2025-09-10 00:43:17.404238 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:43:17.404249 | orchestrator |
2025-09-10 00:43:17.404259 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-09-10 00:43:17.404270 | orchestrator | Wednesday 10 September 2025 00:43:12 +0000 (0:00:00.491) 0:01:07.583 ***
2025-09-10 00:43:17.404281 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:43:17.404292 | orchestrator |
2025-09-10 00:43:17.404302 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-09-10 00:43:17.404313 | orchestrator | Wednesday 10 September 2025 00:43:13 +0000 (0:00:00.495) 0:01:08.078 ***
2025-09-10 00:43:17.404323 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:43:17.404334 | orchestrator |
2025-09-10 00:43:17.404344 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-09-10 00:43:17.404355 | orchestrator | Wednesday 10 September 2025 00:43:14 +0000 (0:00:00.730) 0:01:08.809 ***
2025-09-10 00:43:17.404366 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:43:17.404376 | orchestrator |
2025-09-10 00:43:17.404387 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-09-10 00:43:17.404398 | orchestrator | Wednesday 10 September 2025 00:43:14 +0000 (0:00:00.140) 0:01:08.949 ***
2025-09-10 00:43:17.404408 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:43:17.404419 | orchestrator |
2025-09-10 00:43:17.404429 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-09-10 00:43:17.404440 | orchestrator | Wednesday 10 September 2025 00:43:14 +0000 (0:00:00.131) 0:01:09.080 ***
2025-09-10 00:43:17.404460 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:43:17.404471 | orchestrator |
2025-09-10 00:43:17.404482 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-09-10 00:43:17.404492 | orchestrator | Wednesday 10 September 2025 00:43:14 +0000 (0:00:00.118) 0:01:09.199 ***
2025-09-10 00:43:17.404503 | orchestrator | ok: [testbed-node-5] => {
2025-09-10 00:43:17.404514 | orchestrator |  "vgs_report": {
2025-09-10 00:43:17.404524 | orchestrator |  "vg": []
2025-09-10 00:43:17.404554 | orchestrator |  }
2025-09-10 00:43:17.404565 | orchestrator | }
2025-09-10 00:43:17.404576 | orchestrator |
2025-09-10 00:43:17.404587 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-09-10 00:43:17.404617 | orchestrator | Wednesday 10 September 2025 00:43:14 +0000 (0:00:00.152) 0:01:09.352 ***
2025-09-10 00:43:17.404628 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:43:17.404639 | orchestrator |
2025-09-10 00:43:17.404650 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-09-10 00:43:17.404660 | orchestrator | Wednesday 10 September 2025 00:43:14 +0000 (0:00:00.155) 0:01:09.507 ***
2025-09-10 00:43:17.404671 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:43:17.404682 | orchestrator |
2025-09-10 00:43:17.404693 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-09-10 00:43:17.404703 | orchestrator | Wednesday 10 September 2025 00:43:14 +0000 (0:00:00.154) 0:01:09.661 ***
2025-09-10 00:43:17.404714 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:43:17.404725 | orchestrator |
2025-09-10 00:43:17.404735 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-09-10 00:43:17.404746 | orchestrator | Wednesday 10 September 2025 00:43:15 +0000 (0:00:00.137) 0:01:09.798 ***
2025-09-10 00:43:17.404757 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:43:17.404768 | orchestrator |
2025-09-10 00:43:17.404779 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-09-10 00:43:17.404789 | orchestrator | Wednesday 10 September 2025 00:43:15 +0000 (0:00:00.147) 0:01:09.946 ***
2025-09-10 00:43:17.404800 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:43:17.404811 | orchestrator |
2025-09-10 00:43:17.404822 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-09-10 00:43:17.404832 | orchestrator | Wednesday 10 September 2025 00:43:15 +0000 (0:00:00.141) 0:01:10.088 ***
2025-09-10 00:43:17.404843 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:43:17.404854 | orchestrator |
2025-09-10 00:43:17.404865 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-09-10 00:43:17.404875 | orchestrator | Wednesday 10 September 2025 00:43:15 +0000 (0:00:00.135) 0:01:10.224 ***
2025-09-10 00:43:17.404886 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:43:17.404897 | orchestrator |
2025-09-10 00:43:17.404908 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-09-10 00:43:17.404919 | orchestrator | Wednesday 10 September 2025 00:43:15 +0000 (0:00:00.127) 0:01:10.351 ***
2025-09-10 00:43:17.404929 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:43:17.404940 | orchestrator |
2025-09-10 00:43:17.404951 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-09-10 00:43:17.404961 | orchestrator | Wednesday 10 September 2025 00:43:15 +0000 (0:00:00.135) 0:01:10.486 ***
2025-09-10 00:43:17.404972 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:43:17.404983 | orchestrator |
2025-09-10 00:43:17.404993 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-09-10 00:43:17.405010 | orchestrator | Wednesday 10 September 2025 00:43:16 +0000 (0:00:00.363) 0:01:10.849 ***
2025-09-10 00:43:17.405021 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:43:17.405032 | orchestrator |
2025-09-10 00:43:17.405042 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-09-10 00:43:17.405053 | orchestrator | Wednesday 10 September 2025 00:43:16 +0000 (0:00:00.134) 0:01:10.984 ***
2025-09-10 00:43:17.405064 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:43:17.405082 | orchestrator |
2025-09-10 00:43:17.405093 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-09-10 00:43:17.405103 | orchestrator | Wednesday 10 September 2025 00:43:16 +0000 (0:00:00.145) 0:01:11.130 ***
2025-09-10 00:43:17.405114 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:43:17.405125 | orchestrator |
2025-09-10 00:43:17.405136 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-09-10 00:43:17.405147 | orchestrator | Wednesday 10 September 2025 00:43:16 +0000 (0:00:00.142) 0:01:11.273 ***
2025-09-10 00:43:17.405157 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:43:17.405168 | orchestrator |
2025-09-10 00:43:17.405179 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-09-10 00:43:17.405190 | orchestrator | Wednesday 10 September 2025 00:43:16 +0000 (0:00:00.135) 0:01:11.408 ***
2025-09-10 00:43:17.405200 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:43:17.405211 | orchestrator |
2025-09-10 00:43:17.405222 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-09-10 00:43:17.405233 | orchestrator | Wednesday 10 September 2025 00:43:16 +0000 (0:00:00.154) 0:01:11.562 ***
2025-09-10 00:43:17.405243 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-36dac960-67a7-54a4-bbd2-b6f8976b18f7', 'data_vg': 'ceph-36dac960-67a7-54a4-bbd2-b6f8976b18f7'})
2025-09-10 00:43:17.405255 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f4115e81-926e-57fb-8145-65084efa4466', 'data_vg': 'ceph-f4115e81-926e-57fb-8145-65084efa4466'})
2025-09-10 00:43:17.405265 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:43:17.405276 | orchestrator |
2025-09-10 00:43:17.405287 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-09-10 00:43:17.405297 | orchestrator | Wednesday 10 September 2025 00:43:17 +0000 (0:00:00.186) 0:01:11.749 ***
2025-09-10 00:43:17.405308 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-36dac960-67a7-54a4-bbd2-b6f8976b18f7', 'data_vg': 'ceph-36dac960-67a7-54a4-bbd2-b6f8976b18f7'})
2025-09-10 00:43:17.405319 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f4115e81-926e-57fb-8145-65084efa4466', 'data_vg': 'ceph-f4115e81-926e-57fb-8145-65084efa4466'})
2025-09-10 00:43:17.405330 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:43:17.405341 | orchestrator |
2025-09-10 00:43:17.405352 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-09-10 00:43:17.405362 | orchestrator | Wednesday 10 September 2025 00:43:17 +0000 (0:00:00.169) 0:01:11.919 ***
2025-09-10 00:43:17.405381 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-36dac960-67a7-54a4-bbd2-b6f8976b18f7', 'data_vg': 'ceph-36dac960-67a7-54a4-bbd2-b6f8976b18f7'})
2025-09-10 00:43:20.555296 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f4115e81-926e-57fb-8145-65084efa4466', 'data_vg': 'ceph-f4115e81-926e-57fb-8145-65084efa4466'})
2025-09-10 00:43:20.555402 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:43:20.555418 | orchestrator |
2025-09-10 00:43:20.555430 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-09-10 00:43:20.555442 | orchestrator | Wednesday 10 September 2025 00:43:17 +0000 (0:00:00.145) 0:01:12.065 ***
2025-09-10 00:43:20.555452 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-36dac960-67a7-54a4-bbd2-b6f8976b18f7', 'data_vg': 'ceph-36dac960-67a7-54a4-bbd2-b6f8976b18f7'})
2025-09-10 00:43:20.555462 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f4115e81-926e-57fb-8145-65084efa4466', 'data_vg': 'ceph-f4115e81-926e-57fb-8145-65084efa4466'})
2025-09-10 00:43:20.555472 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:43:20.555482 | orchestrator |
2025-09-10 00:43:20.555492 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-09-10 00:43:20.555501 | orchestrator | Wednesday 10 September 2025 00:43:17 +0000 (0:00:00.183) 0:01:12.248 ***
2025-09-10 00:43:20.555511 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-36dac960-67a7-54a4-bbd2-b6f8976b18f7', 'data_vg': 'ceph-36dac960-67a7-54a4-bbd2-b6f8976b18f7'})
2025-09-10 00:43:20.555543 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f4115e81-926e-57fb-8145-65084efa4466', 'data_vg': 'ceph-f4115e81-926e-57fb-8145-65084efa4466'})
2025-09-10 00:43:20.555554 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:43:20.555563 | orchestrator |
2025-09-10 00:43:20.555573 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-09-10 00:43:20.555583 | orchestrator | Wednesday 10 September 2025 00:43:17 +0000 (0:00:00.168) 0:01:12.416 ***
2025-09-10 00:43:20.555657 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-36dac960-67a7-54a4-bbd2-b6f8976b18f7', 'data_vg': 'ceph-36dac960-67a7-54a4-bbd2-b6f8976b18f7'})
2025-09-10 00:43:20.555670 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f4115e81-926e-57fb-8145-65084efa4466', 'data_vg': 'ceph-f4115e81-926e-57fb-8145-65084efa4466'})
2025-09-10 00:43:20.555680 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:43:20.555689 | orchestrator |
2025-09-10 00:43:20.555699 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-09-10 00:43:20.555709 | orchestrator | Wednesday 10 September 2025 00:43:17 +0000 (0:00:00.169) 0:01:12.586 ***
2025-09-10 00:43:20.555719 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-36dac960-67a7-54a4-bbd2-b6f8976b18f7', 'data_vg': 'ceph-36dac960-67a7-54a4-bbd2-b6f8976b18f7'})
2025-09-10 00:43:20.555729 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f4115e81-926e-57fb-8145-65084efa4466', 'data_vg': 'ceph-f4115e81-926e-57fb-8145-65084efa4466'})
2025-09-10 00:43:20.555738 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:43:20.555748 | orchestrator |
2025-09-10 00:43:20.555758 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-09-10 00:43:20.555768 | orchestrator | Wednesday 10 September 2025 00:43:18 +0000 (0:00:00.382) 0:01:12.968 ***
2025-09-10 00:43:20.555778 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-36dac960-67a7-54a4-bbd2-b6f8976b18f7', 'data_vg': 'ceph-36dac960-67a7-54a4-bbd2-b6f8976b18f7'})
2025-09-10 00:43:20.555788 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f4115e81-926e-57fb-8145-65084efa4466', 'data_vg': 'ceph-f4115e81-926e-57fb-8145-65084efa4466'})
2025-09-10 00:43:20.555798 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:43:20.555807 | orchestrator |
2025-09-10 00:43:20.555817 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-09-10 00:43:20.555826 | orchestrator | Wednesday 10 September 2025 00:43:18 +0000 (0:00:00.177) 0:01:13.145 ***
2025-09-10 00:43:20.555838 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:43:20.555849 | orchestrator |
2025-09-10 00:43:20.555861 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-09-10 00:43:20.555872 | orchestrator | Wednesday 10 September 2025 00:43:18 +0000 (0:00:00.517) 0:01:13.663 ***
2025-09-10 00:43:20.555883 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:43:20.555894 | orchestrator |
2025-09-10 00:43:20.555905 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-09-10 00:43:20.555916 | orchestrator | Wednesday 10 September 2025 00:43:19 +0000 (0:00:00.570) 0:01:14.234 ***
2025-09-10 00:43:20.555927 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:43:20.555938 | orchestrator |
2025-09-10 00:43:20.555949 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-09-10 00:43:20.555960 | orchestrator | Wednesday 10 September 2025 00:43:19 +0000 (0:00:00.152) 0:01:14.386 ***
2025-09-10 00:43:20.555971 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-36dac960-67a7-54a4-bbd2-b6f8976b18f7', 'vg_name': 'ceph-36dac960-67a7-54a4-bbd2-b6f8976b18f7'})
2025-09-10 00:43:20.555983 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-f4115e81-926e-57fb-8145-65084efa4466', 'vg_name': 'ceph-f4115e81-926e-57fb-8145-65084efa4466'})
2025-09-10 00:43:20.555994 | orchestrator |
2025-09-10 00:43:20.556006 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-09-10 00:43:20.556025 | orchestrator | Wednesday 10 September 2025 00:43:19 +0000 (0:00:00.162) 0:01:14.549 ***
2025-09-10 00:43:20.556053 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-36dac960-67a7-54a4-bbd2-b6f8976b18f7', 'data_vg': 'ceph-36dac960-67a7-54a4-bbd2-b6f8976b18f7'})
2025-09-10 00:43:20.556065 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f4115e81-926e-57fb-8145-65084efa4466', 'data_vg': 'ceph-f4115e81-926e-57fb-8145-65084efa4466'})
2025-09-10 00:43:20.556076 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:43:20.556091 | orchestrator |
2025-09-10 00:43:20.556106 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-09-10 00:43:20.556117 | orchestrator | Wednesday 10 September 2025 00:43:20 +0000 (0:00:00.173) 0:01:14.722 ***
2025-09-10 00:43:20.556128 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-36dac960-67a7-54a4-bbd2-b6f8976b18f7', 'data_vg': 'ceph-36dac960-67a7-54a4-bbd2-b6f8976b18f7'})
2025-09-10 00:43:20.556139 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f4115e81-926e-57fb-8145-65084efa4466', 'data_vg': 'ceph-f4115e81-926e-57fb-8145-65084efa4466'})
2025-09-10 00:43:20.556151 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:43:20.556162 | orchestrator |
2025-09-10 00:43:20.556173 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-09-10 00:43:20.556183 | orchestrator | Wednesday 10 September 2025 00:43:20 +0000 (0:00:00.153) 0:01:14.875 ***
2025-09-10 00:43:20.556193 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-36dac960-67a7-54a4-bbd2-b6f8976b18f7', 'data_vg': 'ceph-36dac960-67a7-54a4-bbd2-b6f8976b18f7'})
2025-09-10 00:43:20.556221 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f4115e81-926e-57fb-8145-65084efa4466', 'data_vg': 'ceph-f4115e81-926e-57fb-8145-65084efa4466'})
2025-09-10 00:43:20.556231 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:43:20.556241 | orchestrator |
2025-09-10 00:43:20.556251 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-09-10 00:43:20.556260 | orchestrator | Wednesday 10 September 2025 00:43:20 +0000 (0:00:00.167) 0:01:15.043 ***
2025-09-10 00:43:20.556269 | orchestrator | ok: [testbed-node-5] => {
2025-09-10 00:43:20.556279 | orchestrator |  "lvm_report": {
2025-09-10 00:43:20.556289 | orchestrator |  "lv": [
2025-09-10
00:43:20.556298 | orchestrator |  {
2025-09-10 00:43:20.556308 | orchestrator |  "lv_name": "osd-block-36dac960-67a7-54a4-bbd2-b6f8976b18f7",
2025-09-10 00:43:20.556323 | orchestrator |  "vg_name": "ceph-36dac960-67a7-54a4-bbd2-b6f8976b18f7"
2025-09-10 00:43:20.556332 | orchestrator |  },
2025-09-10 00:43:20.556342 | orchestrator |  {
2025-09-10 00:43:20.556351 | orchestrator |  "lv_name": "osd-block-f4115e81-926e-57fb-8145-65084efa4466",
2025-09-10 00:43:20.556361 | orchestrator |  "vg_name": "ceph-f4115e81-926e-57fb-8145-65084efa4466"
2025-09-10 00:43:20.556370 | orchestrator |  }
2025-09-10 00:43:20.556380 | orchestrator |  ],
2025-09-10 00:43:20.556389 | orchestrator |  "pv": [
2025-09-10 00:43:20.556399 | orchestrator |  {
2025-09-10 00:43:20.556408 | orchestrator |  "pv_name": "/dev/sdb",
2025-09-10 00:43:20.556417 | orchestrator |  "vg_name": "ceph-36dac960-67a7-54a4-bbd2-b6f8976b18f7"
2025-09-10 00:43:20.556427 | orchestrator |  },
2025-09-10 00:43:20.556436 | orchestrator |  {
2025-09-10 00:43:20.556445 | orchestrator |  "pv_name": "/dev/sdc",
2025-09-10 00:43:20.556455 | orchestrator |  "vg_name": "ceph-f4115e81-926e-57fb-8145-65084efa4466"
2025-09-10 00:43:20.556465 | orchestrator |  }
2025-09-10 00:43:20.556474 | orchestrator |  ]
2025-09-10 00:43:20.556483 | orchestrator |  }
2025-09-10 00:43:20.556493 | orchestrator | }
2025-09-10 00:43:20.556502 | orchestrator |
2025-09-10 00:43:20.556512 | orchestrator | PLAY RECAP *********************************************************************
2025-09-10 00:43:20.556528 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-09-10 00:43:20.556538 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-09-10 00:43:20.556548 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-09-10 00:43:20.556557 | orchestrator |
2025-09-10 00:43:20.556566 | orchestrator |
2025-09-10 00:43:20.556576 | orchestrator |
2025-09-10 00:43:20.556585 | orchestrator | TASKS RECAP ********************************************************************
2025-09-10 00:43:20.556623 | orchestrator | Wednesday 10 September 2025 00:43:20 +0000 (0:00:00.145) 0:01:15.189 ***
2025-09-10 00:43:20.556634 | orchestrator | ===============================================================================
2025-09-10 00:43:20.556643 | orchestrator | Create block VGs -------------------------------------------------------- 5.81s
2025-09-10 00:43:20.556653 | orchestrator | Create block LVs -------------------------------------------------------- 4.15s
2025-09-10 00:43:20.556662 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.94s
2025-09-10 00:43:20.556671 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.80s
2025-09-10 00:43:20.556681 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.58s
2025-09-10 00:43:20.556690 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.58s
2025-09-10 00:43:20.556699 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.56s
2025-09-10 00:43:20.556709 | orchestrator | Add known partitions to the list of available block devices ------------- 1.52s
2025-09-10 00:43:20.556725 | orchestrator | Add known links to the list of available block devices ------------------ 1.27s
2025-09-10 00:43:20.970665 | orchestrator | Add known partitions to the list of available block devices ------------- 1.20s
2025-09-10 00:43:20.970775 | orchestrator | Add known links to the list of available block devices ------------------ 0.94s
2025-09-10 00:43:20.970790 | orchestrator | Print LVM report data --------------------------------------------------- 0.91s
2025-09-10 00:43:20.970802 | orchestrator | Print number of OSDs wanted per DB VG ----------------------------------- 0.86s
2025-09-10 00:43:20.970813 | orchestrator | Add known partitions to the list of available block devices ------------- 0.84s
2025-09-10 00:43:20.970823 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.80s
2025-09-10 00:43:20.970834 | orchestrator | Get initial list of available block devices ----------------------------- 0.76s
2025-09-10 00:43:20.970845 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.75s
2025-09-10 00:43:20.970855 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.75s
2025-09-10 00:43:20.970866 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.74s
2025-09-10 00:43:20.970877 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.73s
2025-09-10 00:43:33.418531 | orchestrator | 2025-09-10 00:43:33 | INFO  | Task 4b28767a-3479-41f5-90c6-a8b56df7bd06 (facts) was prepared for execution.
2025-09-10 00:43:33.418666 | orchestrator | 2025-09-10 00:43:33 | INFO  | It takes a moment until task 4b28767a-3479-41f5-90c6-a8b56df7bd06 (facts) has been started and output is visible here.
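The "Print LVM report data" task above dumps two lists, `lv` and `pv`, that share a `vg_name` key. A minimal sketch of how such a report can be joined to map each OSD logical volume to its backing block device; the data is copied from the log output above, while the helper function is illustrative and not part of the OSISM roles:

```python
# Data copied from the "Print LVM report data" task output above.
lvm_report = {
    "lv": [
        {"lv_name": "osd-block-36dac960-67a7-54a4-bbd2-b6f8976b18f7",
         "vg_name": "ceph-36dac960-67a7-54a4-bbd2-b6f8976b18f7"},
        {"lv_name": "osd-block-f4115e81-926e-57fb-8145-65084efa4466",
         "vg_name": "ceph-f4115e81-926e-57fb-8145-65084efa4466"},
    ],
    "pv": [
        {"pv_name": "/dev/sdb", "vg_name": "ceph-36dac960-67a7-54a4-bbd2-b6f8976b18f7"},
        {"pv_name": "/dev/sdc", "vg_name": "ceph-f4115e81-926e-57fb-8145-65084efa4466"},
    ],
}


def map_lv_to_device(report):
    """Return {lv_name: pv_name} by joining the lv and pv lists on vg_name."""
    vg_to_pv = {pv["vg_name"]: pv["pv_name"] for pv in report["pv"]}
    return {lv["lv_name"]: vg_to_pv.get(lv["vg_name"]) for lv in report["lv"]}


# One VG per OSD: each osd-block-* LV resolves to exactly one physical device.
print(map_lv_to_device(lvm_report))
```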
2025-09-10 00:43:46.904478 | orchestrator |
2025-09-10 00:43:46.904573 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-09-10 00:43:46.904626 | orchestrator |
2025-09-10 00:43:46.904639 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-09-10 00:43:46.904651 | orchestrator | Wednesday 10 September 2025 00:43:37 +0000 (0:00:00.279) 0:00:00.279 ***
2025-09-10 00:43:46.904662 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:43:46.904673 | orchestrator | ok: [testbed-manager]
2025-09-10 00:43:46.904732 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:43:46.904745 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:43:46.904755 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:43:46.904766 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:43:46.904776 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:43:46.904786 | orchestrator |
2025-09-10 00:43:46.904797 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-09-10 00:43:46.904808 | orchestrator | Wednesday 10 September 2025 00:43:38 +0000 (0:00:01.125) 0:00:01.405 ***
2025-09-10 00:43:46.904831 | orchestrator | skipping: [testbed-manager]
2025-09-10 00:43:46.904843 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:43:46.904854 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:43:46.904864 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:43:46.904875 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:43:46.904885 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:43:46.904896 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:43:46.904907 | orchestrator |
2025-09-10 00:43:46.904918 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-10 00:43:46.904928 | orchestrator |
2025-09-10 00:43:46.904939 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-10 00:43:46.904950 | orchestrator | Wednesday 10 September 2025 00:43:40 +0000 (0:00:01.256) 0:00:02.662 ***
2025-09-10 00:43:46.904960 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:43:46.904974 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:43:46.904993 | orchestrator | ok: [testbed-manager]
2025-09-10 00:43:46.905010 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:43:46.905028 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:43:46.905048 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:43:46.905067 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:43:46.905079 | orchestrator |
2025-09-10 00:43:46.905092 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-09-10 00:43:46.905104 | orchestrator |
2025-09-10 00:43:46.905120 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-09-10 00:43:46.905139 | orchestrator | Wednesday 10 September 2025 00:43:45 +0000 (0:00:05.916) 0:00:08.579 ***
2025-09-10 00:43:46.905158 | orchestrator | skipping: [testbed-manager]
2025-09-10 00:43:46.905176 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:43:46.905198 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:43:46.905218 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:43:46.905236 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:43:46.905254 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:43:46.905274 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:43:46.905286 | orchestrator |
2025-09-10 00:43:46.905296 | orchestrator | PLAY RECAP *********************************************************************
2025-09-10 00:43:46.905307 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-10 00:43:46.905319 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-10 00:43:46.905330 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-10 00:43:46.905340 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-10 00:43:46.905351 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-10 00:43:46.905361 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-10 00:43:46.905372 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-10 00:43:46.905394 | orchestrator |
2025-09-10 00:43:46.905405 | orchestrator |
2025-09-10 00:43:46.905416 | orchestrator | TASKS RECAP ********************************************************************
2025-09-10 00:43:46.905426 | orchestrator | Wednesday 10 September 2025 00:43:46 +0000 (0:00:00.509) 0:00:09.088 ***
2025-09-10 00:43:46.905437 | orchestrator | ===============================================================================
2025-09-10 00:43:46.905448 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.92s
2025-09-10 00:43:46.905458 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.26s
2025-09-10 00:43:46.905469 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.13s
2025-09-10 00:43:46.905479 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.51s
2025-09-10 00:43:59.187640 | orchestrator | 2025-09-10 00:43:59 | INFO  | Task 4bd59b0f-5903-4fff-b3f5-5bf08e9e6fd8 (frr) was prepared for execution.
2025-09-10 00:43:59.187723 | orchestrator | 2025-09-10 00:43:59 | INFO  | It takes a moment until task 4bd59b0f-5903-4fff-b3f5-5bf08e9e6fd8 (frr) has been started and output is visible here.
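The `osism.commons.facts` play above creates a custom facts directory and (conditionally) copies fact files into it; Ansible executes files in a host's `facts.d` directory and merges any JSON they print into `ansible_local`. A minimal sketch of such a fact script, assuming it would be dropped into `/etc/ansible/facts.d/`; the fact name and keys here are invented for illustration and are not taken from the role:

```python
import json


def render_custom_fact():
    """Build the JSON payload a facts.d executable would print to stdout.

    Ansible runs executables in /etc/ansible/facts.d and exposes their
    output as ansible_local.<filename>.<keys>. The keys below are
    illustrative assumptions, not values used by osism.commons.facts.
    """
    fact = {"deployment": {"environment": "testbed", "managed_by": "osism"}}
    return json.dumps(fact)


if __name__ == "__main__":
    print(render_custom_fact())
```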
2025-09-10 00:44:27.658932 | orchestrator |
2025-09-10 00:44:27.659023 | orchestrator | PLAY [Apply role frr] **********************************************************
2025-09-10 00:44:27.659039 | orchestrator |
2025-09-10 00:44:27.659052 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2025-09-10 00:44:27.659063 | orchestrator | Wednesday 10 September 2025 00:44:03 +0000 (0:00:00.239) 0:00:00.239 ***
2025-09-10 00:44:27.659074 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2025-09-10 00:44:27.659086 | orchestrator |
2025-09-10 00:44:27.659096 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2025-09-10 00:44:27.659107 | orchestrator | Wednesday 10 September 2025 00:44:03 +0000 (0:00:00.211) 0:00:00.450 ***
2025-09-10 00:44:27.659118 | orchestrator | changed: [testbed-manager]
2025-09-10 00:44:27.659129 | orchestrator |
2025-09-10 00:44:27.659140 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2025-09-10 00:44:27.659150 | orchestrator | Wednesday 10 September 2025 00:44:04 +0000 (0:00:01.161) 0:00:01.612 ***
2025-09-10 00:44:27.659161 | orchestrator | changed: [testbed-manager]
2025-09-10 00:44:27.659172 | orchestrator |
2025-09-10 00:44:27.659196 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2025-09-10 00:44:27.659208 | orchestrator | Wednesday 10 September 2025 00:44:15 +0000 (0:00:11.068) 0:00:12.680 ***
2025-09-10 00:44:27.659218 | orchestrator | ok: [testbed-manager]
2025-09-10 00:44:27.659229 | orchestrator |
2025-09-10 00:44:27.659240 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2025-09-10 00:44:27.659251 | orchestrator | Wednesday 10 September 2025 00:44:17 +0000 (0:00:01.348) 0:00:14.029 ***
2025-09-10 00:44:27.659261 | orchestrator | changed: [testbed-manager]
2025-09-10 00:44:27.659272 | orchestrator |
2025-09-10 00:44:27.659283 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2025-09-10 00:44:27.659293 | orchestrator | Wednesday 10 September 2025 00:44:18 +0000 (0:00:00.979) 0:00:15.009 ***
2025-09-10 00:44:27.659304 | orchestrator | ok: [testbed-manager]
2025-09-10 00:44:27.659314 | orchestrator |
2025-09-10 00:44:27.659325 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2025-09-10 00:44:27.659336 | orchestrator | Wednesday 10 September 2025 00:44:19 +0000 (0:00:01.201) 0:00:16.210 ***
2025-09-10 00:44:27.659347 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-10 00:44:27.659358 | orchestrator |
2025-09-10 00:44:27.659368 | orchestrator | TASK [osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf] ***
2025-09-10 00:44:27.659379 | orchestrator | Wednesday 10 September 2025 00:44:20 +0000 (0:00:00.859) 0:00:17.070 ***
2025-09-10 00:44:27.659389 | orchestrator | skipping: [testbed-manager]
2025-09-10 00:44:27.659400 | orchestrator |
2025-09-10 00:44:27.659411 | orchestrator | TASK [osism.services.frr : Copy file from the role: /etc/frr/frr.conf] *********
2025-09-10 00:44:27.659443 | orchestrator | Wednesday 10 September 2025 00:44:20 +0000 (0:00:00.183) 0:00:17.253 ***
2025-09-10 00:44:27.659454 | orchestrator | changed: [testbed-manager]
2025-09-10 00:44:27.659465 | orchestrator |
2025-09-10 00:44:27.659476 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2025-09-10 00:44:27.659486 | orchestrator | Wednesday 10 September 2025 00:44:21 +0000 (0:00:00.983) 0:00:18.237 ***
2025-09-10 00:44:27.659499 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2025-09-10 00:44:27.659511 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2025-09-10 00:44:27.659525 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2025-09-10 00:44:27.659537 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2025-09-10 00:44:27.659550 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2025-09-10 00:44:27.659562 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2025-09-10 00:44:27.659574 | orchestrator |
2025-09-10 00:44:27.659616 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2025-09-10 00:44:27.659629 | orchestrator | Wednesday 10 September 2025 00:44:24 +0000 (0:00:03.270) 0:00:21.507 ***
2025-09-10 00:44:27.659641 | orchestrator | ok: [testbed-manager]
2025-09-10 00:44:27.659653 | orchestrator |
2025-09-10 00:44:27.659665 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
2025-09-10 00:44:27.659677 | orchestrator | Wednesday 10 September 2025 00:44:25 +0000 (0:00:01.400) 0:00:22.907 ***
2025-09-10 00:44:27.659689 | orchestrator | changed: [testbed-manager]
2025-09-10 00:44:27.659701 | orchestrator |
2025-09-10 00:44:27.659713 | orchestrator | PLAY RECAP *********************************************************************
2025-09-10 00:44:27.659726 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-10 00:44:27.659738 | orchestrator |
2025-09-10 00:44:27.659750 | orchestrator |
2025-09-10 00:44:27.659762 | orchestrator | TASKS RECAP ********************************************************************
2025-09-10 00:44:27.659774 | orchestrator | Wednesday 10 September 2025 00:44:27 +0000 (0:00:01.416) 0:00:24.324 ***
2025-09-10 00:44:27.659786 | orchestrator | ===============================================================================
2025-09-10 00:44:27.659799 | orchestrator | osism.services.frr : Install frr package ------------------------------- 11.07s
2025-09-10 00:44:27.659811 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.27s
2025-09-10 00:44:27.659824 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.42s
2025-09-10 00:44:27.659836 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.40s
2025-09-10 00:44:27.659864 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.35s
2025-09-10 00:44:27.659876 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.20s
2025-09-10 00:44:27.659887 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.16s
2025-09-10 00:44:27.659897 | orchestrator | osism.services.frr : Copy file from the role: /etc/frr/frr.conf --------- 0.98s
2025-09-10 00:44:27.659908 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.98s
2025-09-10 00:44:27.659919 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.86s
2025-09-10 00:44:27.659929 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.21s
2025-09-10 00:44:27.659940 | orchestrator | osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf --- 0.18s
2025-09-10 00:44:28.016784 | orchestrator |
2025-09-10 00:44:28.019992 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Wed Sep 10 00:44:28 UTC 2025
2025-09-10 00:44:28.020040 | orchestrator |
2025-09-10 00:44:29.938535 | orchestrator | 2025-09-10 00:44:29 | INFO  | Collection nutshell is prepared for execution
2025-09-10 00:44:29.938660 | orchestrator | 2025-09-10
00:44:29 | INFO  | D [0] - dotfiles 2025-09-10 00:44:39.969046 | orchestrator | 2025-09-10 00:44:39 | INFO  | D [0] - homer 2025-09-10 00:44:39.969138 | orchestrator | 2025-09-10 00:44:39 | INFO  | D [0] - netdata 2025-09-10 00:44:39.969155 | orchestrator | 2025-09-10 00:44:39 | INFO  | D [0] - openstackclient 2025-09-10 00:44:39.969348 | orchestrator | 2025-09-10 00:44:39 | INFO  | D [0] - phpmyadmin 2025-09-10 00:44:39.969428 | orchestrator | 2025-09-10 00:44:39 | INFO  | A [0] - common 2025-09-10 00:44:39.973779 | orchestrator | 2025-09-10 00:44:39 | INFO  | A [1] -- loadbalancer 2025-09-10 00:44:39.973884 | orchestrator | 2025-09-10 00:44:39 | INFO  | D [2] --- opensearch 2025-09-10 00:44:39.974459 | orchestrator | 2025-09-10 00:44:39 | INFO  | A [2] --- mariadb-ng 2025-09-10 00:44:39.974792 | orchestrator | 2025-09-10 00:44:39 | INFO  | D [3] ---- horizon 2025-09-10 00:44:39.974971 | orchestrator | 2025-09-10 00:44:39 | INFO  | A [3] ---- keystone 2025-09-10 00:44:39.975325 | orchestrator | 2025-09-10 00:44:39 | INFO  | A [4] ----- neutron 2025-09-10 00:44:39.975457 | orchestrator | 2025-09-10 00:44:39 | INFO  | D [5] ------ wait-for-nova 2025-09-10 00:44:39.975962 | orchestrator | 2025-09-10 00:44:39 | INFO  | A [5] ------ octavia 2025-09-10 00:44:39.977373 | orchestrator | 2025-09-10 00:44:39 | INFO  | D [4] ----- barbican 2025-09-10 00:44:39.977394 | orchestrator | 2025-09-10 00:44:39 | INFO  | D [4] ----- designate 2025-09-10 00:44:39.977767 | orchestrator | 2025-09-10 00:44:39 | INFO  | D [4] ----- ironic 2025-09-10 00:44:39.977789 | orchestrator | 2025-09-10 00:44:39 | INFO  | D [4] ----- placement 2025-09-10 00:44:39.978564 | orchestrator | 2025-09-10 00:44:39 | INFO  | D [4] ----- magnum 2025-09-10 00:44:39.979172 | orchestrator | 2025-09-10 00:44:39 | INFO  | A [1] -- openvswitch 2025-09-10 00:44:39.979306 | orchestrator | 2025-09-10 00:44:39 | INFO  | D [2] --- ovn 2025-09-10 00:44:39.979833 | orchestrator | 2025-09-10 00:44:39 | INFO  | D [1] -- 
memcached 2025-09-10 00:44:39.979853 | orchestrator | 2025-09-10 00:44:39 | INFO  | D [1] -- redis 2025-09-10 00:44:39.980016 | orchestrator | 2025-09-10 00:44:39 | INFO  | D [1] -- rabbitmq-ng 2025-09-10 00:44:39.980752 | orchestrator | 2025-09-10 00:44:39 | INFO  | A [0] - kubernetes 2025-09-10 00:44:39.983938 | orchestrator | 2025-09-10 00:44:39 | INFO  | D [1] -- kubeconfig 2025-09-10 00:44:39.983968 | orchestrator | 2025-09-10 00:44:39 | INFO  | A [1] -- copy-kubeconfig 2025-09-10 00:44:39.983979 | orchestrator | 2025-09-10 00:44:39 | INFO  | A [0] - ceph 2025-09-10 00:44:39.985917 | orchestrator | 2025-09-10 00:44:39 | INFO  | A [1] -- ceph-pools 2025-09-10 00:44:39.986214 | orchestrator | 2025-09-10 00:44:39 | INFO  | A [2] --- copy-ceph-keys 2025-09-10 00:44:39.986234 | orchestrator | 2025-09-10 00:44:39 | INFO  | A [3] ---- cephclient 2025-09-10 00:44:39.986245 | orchestrator | 2025-09-10 00:44:39 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-09-10 00:44:39.986569 | orchestrator | 2025-09-10 00:44:39 | INFO  | A [4] ----- wait-for-keystone 2025-09-10 00:44:39.986613 | orchestrator | 2025-09-10 00:44:39 | INFO  | D [5] ------ kolla-ceph-rgw 2025-09-10 00:44:39.986876 | orchestrator | 2025-09-10 00:44:39 | INFO  | D [5] ------ glance 2025-09-10 00:44:39.986897 | orchestrator | 2025-09-10 00:44:39 | INFO  | D [5] ------ cinder 2025-09-10 00:44:39.987512 | orchestrator | 2025-09-10 00:44:39 | INFO  | D [5] ------ nova 2025-09-10 00:44:39.987557 | orchestrator | 2025-09-10 00:44:39 | INFO  | A [4] ----- prometheus 2025-09-10 00:44:39.987999 | orchestrator | 2025-09-10 00:44:39 | INFO  | D [5] ------ grafana 2025-09-10 00:44:40.183923 | orchestrator | 2025-09-10 00:44:40 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-09-10 00:44:40.186143 | orchestrator | 2025-09-10 00:44:40 | INFO  | Tasks are running in the background 2025-09-10 00:44:43.281785 | orchestrator | 2025-09-10 00:44:43 | INFO  | No task IDs specified, wait for 
all currently running tasks 2025-09-10 00:44:45.380514 | orchestrator | 2025-09-10 00:44:45 | INFO  | Task f4d6d817-1a84-47eb-a781-997af2bc942d is in state STARTED 2025-09-10 00:44:45.380647 | orchestrator | 2025-09-10 00:44:45 | INFO  | Task cb5f09db-aacf-442d-abca-5678f317285f is in state STARTED 2025-09-10 00:44:45.381060 | orchestrator | 2025-09-10 00:44:45 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED 2025-09-10 00:44:45.381604 | orchestrator | 2025-09-10 00:44:45 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED 2025-09-10 00:44:45.382133 | orchestrator | 2025-09-10 00:44:45 | INFO  | Task 5055cd45-586a-4b2a-bc27-cfae325935ec is in state STARTED 2025-09-10 00:44:45.382700 | orchestrator | 2025-09-10 00:44:45 | INFO  | Task 3f33e14e-ea88-4518-b5de-da7d227fdf66 is in state STARTED 2025-09-10 00:44:45.383147 | orchestrator | 2025-09-10 00:44:45 | INFO  | Task 030ca014-ff8b-4482-af90-85efb42b42ed is in state STARTED 2025-09-10 00:44:45.383622 | orchestrator | 2025-09-10 00:44:45 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:44:48.429278 | orchestrator | 2025-09-10 00:44:48 | INFO  | Task f4d6d817-1a84-47eb-a781-997af2bc942d is in state STARTED 2025-09-10 00:44:48.429381 | orchestrator | 2025-09-10 00:44:48 | INFO  | Task cb5f09db-aacf-442d-abca-5678f317285f is in state STARTED 2025-09-10 00:44:48.429395 | orchestrator | 2025-09-10 00:44:48 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED 2025-09-10 00:44:48.429406 | orchestrator | 2025-09-10 00:44:48 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED 2025-09-10 00:44:48.429417 | orchestrator | 2025-09-10 00:44:48 | INFO  | Task 5055cd45-586a-4b2a-bc27-cfae325935ec is in state STARTED 2025-09-10 00:44:48.431790 | orchestrator | 2025-09-10 00:44:48 | INFO  | Task 3f33e14e-ea88-4518-b5de-da7d227fdf66 is in state STARTED 2025-09-10 00:44:48.431826 | orchestrator | 2025-09-10 00:44:48 | INFO  | Task 
030ca014-ff8b-4482-af90-85efb42b42ed is in state STARTED 2025-09-10 00:44:48.431838 | orchestrator | 2025-09-10 00:44:48 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:44:51.465294 | orchestrator | 2025-09-10 00:44:51 | INFO  | Task f4d6d817-1a84-47eb-a781-997af2bc942d is in state STARTED 2025-09-10 00:44:51.465389 | orchestrator | 2025-09-10 00:44:51 | INFO  | Task cb5f09db-aacf-442d-abca-5678f317285f is in state STARTED 2025-09-10 00:44:51.465829 | orchestrator | 2025-09-10 00:44:51 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED 2025-09-10 00:44:51.466461 | orchestrator | 2025-09-10 00:44:51 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED 2025-09-10 00:44:51.466950 | orchestrator | 2025-09-10 00:44:51 | INFO  | Task 5055cd45-586a-4b2a-bc27-cfae325935ec is in state STARTED 2025-09-10 00:44:51.467446 | orchestrator | 2025-09-10 00:44:51 | INFO  | Task 3f33e14e-ea88-4518-b5de-da7d227fdf66 is in state STARTED 2025-09-10 00:44:51.468091 | orchestrator | 2025-09-10 00:44:51 | INFO  | Task 030ca014-ff8b-4482-af90-85efb42b42ed is in state STARTED 2025-09-10 00:44:51.468117 | orchestrator | 2025-09-10 00:44:51 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:44:54.607129 | orchestrator | 2025-09-10 00:44:54 | INFO  | Task f4d6d817-1a84-47eb-a781-997af2bc942d is in state STARTED 2025-09-10 00:44:54.607268 | orchestrator | 2025-09-10 00:44:54 | INFO  | Task cb5f09db-aacf-442d-abca-5678f317285f is in state STARTED 2025-09-10 00:44:54.607285 | orchestrator | 2025-09-10 00:44:54 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED 2025-09-10 00:44:54.607298 | orchestrator | 2025-09-10 00:44:54 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED 2025-09-10 00:44:54.607309 | orchestrator | 2025-09-10 00:44:54 | INFO  | Task 5055cd45-586a-4b2a-bc27-cfae325935ec is in state STARTED 2025-09-10 00:44:54.607321 | orchestrator | 2025-09-10 00:44:54 | INFO  | Task 
3f33e14e-ea88-4518-b5de-da7d227fdf66 is in state STARTED 2025-09-10 00:44:54.607332 | orchestrator | 2025-09-10 00:44:54 | INFO  | Task 030ca014-ff8b-4482-af90-85efb42b42ed is in state STARTED 2025-09-10 00:44:54.607344 | orchestrator | 2025-09-10 00:44:54 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:44:57.602214 | orchestrator | 2025-09-10 00:44:57 | INFO  | Task f4d6d817-1a84-47eb-a781-997af2bc942d is in state STARTED 2025-09-10 00:44:57.602315 | orchestrator | 2025-09-10 00:44:57 | INFO  | Task cb5f09db-aacf-442d-abca-5678f317285f is in state STARTED 2025-09-10 00:44:57.602329 | orchestrator | 2025-09-10 00:44:57 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED 2025-09-10 00:44:57.602342 | orchestrator | 2025-09-10 00:44:57 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED 2025-09-10 00:44:57.602353 | orchestrator | 2025-09-10 00:44:57 | INFO  | Task 5055cd45-586a-4b2a-bc27-cfae325935ec is in state STARTED 2025-09-10 00:44:57.602364 | orchestrator | 2025-09-10 00:44:57 | INFO  | Task 3f33e14e-ea88-4518-b5de-da7d227fdf66 is in state STARTED 2025-09-10 00:44:57.602375 | orchestrator | 2025-09-10 00:44:57 | INFO  | Task 030ca014-ff8b-4482-af90-85efb42b42ed is in state STARTED 2025-09-10 00:44:57.602386 | orchestrator | 2025-09-10 00:44:57 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:45:00.633566 | orchestrator | 2025-09-10 00:45:00 | INFO  | Task f4d6d817-1a84-47eb-a781-997af2bc942d is in state STARTED 2025-09-10 00:45:00.658572 | orchestrator | 2025-09-10 00:45:00 | INFO  | Task cb5f09db-aacf-442d-abca-5678f317285f is in state STARTED 2025-09-10 00:45:00.658679 | orchestrator | 2025-09-10 00:45:00 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED 2025-09-10 00:45:00.658692 | orchestrator | 2025-09-10 00:45:00 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED 2025-09-10 00:45:00.658703 | orchestrator | 2025-09-10 00:45:00 | INFO  | Task 
5055cd45-586a-4b2a-bc27-cfae325935ec is in state STARTED 2025-09-10 00:45:00.658715 | orchestrator | 2025-09-10 00:45:00 | INFO  | Task 3f33e14e-ea88-4518-b5de-da7d227fdf66 is in state STARTED 2025-09-10 00:45:00.658725 | orchestrator | 2025-09-10 00:45:00 | INFO  | Task 030ca014-ff8b-4482-af90-85efb42b42ed is in state STARTED 2025-09-10 00:45:00.658737 | orchestrator | 2025-09-10 00:45:00 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:45:03.769095 | orchestrator | 2025-09-10 00:45:03 | INFO  | Task f4d6d817-1a84-47eb-a781-997af2bc942d is in state STARTED 2025-09-10 00:45:03.769182 | orchestrator | 2025-09-10 00:45:03 | INFO  | Task cb5f09db-aacf-442d-abca-5678f317285f is in state STARTED 2025-09-10 00:45:03.769197 | orchestrator | 2025-09-10 00:45:03 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED 2025-09-10 00:45:03.769209 | orchestrator | 2025-09-10 00:45:03 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED 2025-09-10 00:45:03.769245 | orchestrator | 2025-09-10 00:45:03 | INFO  | Task 5055cd45-586a-4b2a-bc27-cfae325935ec is in state STARTED 2025-09-10 00:45:03.769257 | orchestrator | 2025-09-10 00:45:03 | INFO  | Task 3f33e14e-ea88-4518-b5de-da7d227fdf66 is in state STARTED 2025-09-10 00:45:03.769268 | orchestrator | 2025-09-10 00:45:03 | INFO  | Task 030ca014-ff8b-4482-af90-85efb42b42ed is in state STARTED 2025-09-10 00:45:03.769279 | orchestrator | 2025-09-10 00:45:03 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:45:06.822327 | orchestrator | 2025-09-10 00:45:06.822410 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-09-10 00:45:06.822425 | orchestrator | 2025-09-10 00:45:06.822436 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] 
**** 2025-09-10 00:45:06.822447 | orchestrator | Wednesday 10 September 2025 00:44:53 +0000 (0:00:00.623) 0:00:00.623 *** 2025-09-10 00:45:06.822458 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:45:06.822470 | orchestrator | changed: [testbed-manager] 2025-09-10 00:45:06.822480 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:45:06.822491 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:45:06.822501 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:45:06.822512 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:45:06.822522 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:45:06.822533 | orchestrator | 2025-09-10 00:45:06.822544 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2025-09-10 00:45:06.822554 | orchestrator | Wednesday 10 September 2025 00:44:57 +0000 (0:00:04.407) 0:00:05.030 *** 2025-09-10 00:45:06.822565 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-09-10 00:45:06.822615 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-09-10 00:45:06.822627 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-09-10 00:45:06.822638 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-09-10 00:45:06.822649 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-09-10 00:45:06.822660 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-09-10 00:45:06.822670 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-09-10 00:45:06.822681 | orchestrator | 2025-09-10 00:45:06.822692 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2025-09-10 00:45:06.822703 | orchestrator | Wednesday 10 September 2025 00:44:58 +0000 (0:00:01.333) 0:00:06.364 *** 2025-09-10 00:45:06.822718 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-10 00:44:58.169733', 'end': '2025-09-10 00:44:58.183073', 'delta': '0:00:00.013340', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-10 00:45:06.822746 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-10 00:44:58.231332', 'end': '2025-09-10 00:44:58.242164', 'delta': '0:00:00.010832', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-10 00:45:06.822779 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-10 00:44:58.305727', 'end': '2025-09-10 00:44:58.312701', 'delta': '0:00:00.006974', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-10 00:45:06.822809 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-10 00:44:58.409446', 'end': '2025-09-10 00:44:58.418943', 'delta': '0:00:00.009497', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-10 00:45:06.822821 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-10 00:44:58.518875', 'end': '2025-09-10 00:44:58.526466', 'delta': '0:00:00.007591', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': 
{'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-10 00:45:06.822832 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-10 00:44:58.771190', 'end': '2025-09-10 00:44:58.781237', 'delta': '0:00:00.010047', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-10 00:45:06.822848 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-10 00:44:58.725370', 'end': '2025-09-10 00:44:58.734260', 'delta': '0:00:00.008890', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': 
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-10 00:45:06.822872 | orchestrator |
2025-09-10 00:45:06.822884 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2025-09-10 00:45:06.822895 | orchestrator | Wednesday 10 September 2025 00:45:01 +0000 (0:00:02.147) 0:00:08.511 ***
2025-09-10 00:45:06.822906 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-09-10 00:45:06.822918 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-09-10 00:45:06.822931 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-09-10 00:45:06.822944 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-09-10 00:45:06.822956 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-09-10 00:45:06.822969 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-09-10 00:45:06.822982 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-09-10 00:45:06.822995 | orchestrator |
2025-09-10 00:45:06.823007 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2025-09-10 00:45:06.823020 | orchestrator | Wednesday 10 September 2025 00:45:02 +0000 (0:00:01.223) 0:00:09.735 ***
2025-09-10 00:45:06.823032 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2025-09-10 00:45:06.823045 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2025-09-10 00:45:06.823058 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2025-09-10 00:45:06.823070 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2025-09-10 00:45:06.823082 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2025-09-10 00:45:06.823095 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2025-09-10 00:45:06.823108 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2025-09-10 00:45:06.823119 | orchestrator |
2025-09-10 00:45:06.823133 | orchestrator | PLAY RECAP *********************************************************************
2025-09-10 00:45:06.823152 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-10 00:45:06.823166 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-10 00:45:06.823178 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-10 00:45:06.823191 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-10 00:45:06.823204 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-10 00:45:06.823216 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-10 00:45:06.823229 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-10 00:45:06.823241 | orchestrator |
2025-09-10 00:45:06.823254 | orchestrator |
2025-09-10 00:45:06.823267 | orchestrator | TASKS RECAP ********************************************************************
2025-09-10 00:45:06.823278 | orchestrator | Wednesday 10 September 2025 00:45:04 +0000 (0:00:02.482) 0:00:12.217 ***
2025-09-10 00:45:06.823289 | orchestrator | ===============================================================================
2025-09-10 00:45:06.823300 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.41s
2025-09-10 00:45:06.823316 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.48s
2025-09-10 00:45:06.823347 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.15s
2025-09-10 00:45:06.823367 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.33s
2025-09-10 00:45:06.823386 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.22s
2025-09-10 00:45:06.823405 | orchestrator | 2025-09-10 00:45:06 | INFO  | Task f4d6d817-1a84-47eb-a781-997af2bc942d is in state SUCCESS
2025-09-10 00:45:06.823424 | orchestrator | 2025-09-10 00:45:06 | INFO  | Task f396ab6b-5bd0-4f47-90cc-dd1ba7b6c088 is in state STARTED
2025-09-10 00:45:06.823749 | orchestrator | 2025-09-10 00:45:06 | INFO  | Task cb5f09db-aacf-442d-abca-5678f317285f is in state STARTED
2025-09-10 00:45:06.825313 | orchestrator | 2025-09-10 00:45:06 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED
2025-09-10 00:45:06.825340 | orchestrator | 2025-09-10 00:45:06 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED
2025-09-10 00:45:06.826516 | orchestrator | 2025-09-10 00:45:06 | INFO  | Task 5055cd45-586a-4b2a-bc27-cfae325935ec is in state STARTED
2025-09-10 00:45:06.828424 | orchestrator | 2025-09-10 00:45:06 | INFO  | Task 3f33e14e-ea88-4518-b5de-da7d227fdf66 is in state STARTED
2025-09-10 00:45:06.832232 | orchestrator | 2025-09-10 00:45:06 | INFO  | Task
030ca014-ff8b-4482-af90-85efb42b42ed is in state STARTED 2025-09-10 00:45:06.832254 | orchestrator | 2025-09-10 00:45:06 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:45:09.902270 | orchestrator | 2025-09-10 00:45:09 | INFO  | Task f396ab6b-5bd0-4f47-90cc-dd1ba7b6c088 is in state STARTED 2025-09-10 00:45:09.902351 | orchestrator | 2025-09-10 00:45:09 | INFO  | Task cb5f09db-aacf-442d-abca-5678f317285f is in state STARTED 2025-09-10 00:45:09.902363 | orchestrator | 2025-09-10 00:45:09 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED 2025-09-10 00:45:09.902373 | orchestrator | 2025-09-10 00:45:09 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED 2025-09-10 00:45:09.902382 | orchestrator | 2025-09-10 00:45:09 | INFO  | Task 5055cd45-586a-4b2a-bc27-cfae325935ec is in state STARTED 2025-09-10 00:45:09.902390 | orchestrator | 2025-09-10 00:45:09 | INFO  | Task 3f33e14e-ea88-4518-b5de-da7d227fdf66 is in state STARTED 2025-09-10 00:45:09.906653 | orchestrator | 2025-09-10 00:45:09 | INFO  | Task 030ca014-ff8b-4482-af90-85efb42b42ed is in state STARTED 2025-09-10 00:45:09.906696 | orchestrator | 2025-09-10 00:45:09 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:45:12.980071 | orchestrator | 2025-09-10 00:45:12 | INFO  | Task f396ab6b-5bd0-4f47-90cc-dd1ba7b6c088 is in state STARTED 2025-09-10 00:45:12.980187 | orchestrator | 2025-09-10 00:45:12 | INFO  | Task cb5f09db-aacf-442d-abca-5678f317285f is in state STARTED 2025-09-10 00:45:12.980205 | orchestrator | 2025-09-10 00:45:12 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED 2025-09-10 00:45:12.980217 | orchestrator | 2025-09-10 00:45:12 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED 2025-09-10 00:45:12.980228 | orchestrator | 2025-09-10 00:45:12 | INFO  | Task 5055cd45-586a-4b2a-bc27-cfae325935ec is in state STARTED 2025-09-10 00:45:12.980239 | orchestrator | 2025-09-10 00:45:12 | INFO  | Task 
3f33e14e-ea88-4518-b5de-da7d227fdf66 is in state STARTED
2025-09-10 00:45:12.980249 | orchestrator | 2025-09-10 00:45:12 | INFO  | Task 030ca014-ff8b-4482-af90-85efb42b42ed is in state STARTED
2025-09-10 00:45:12.980260 | orchestrator | 2025-09-10 00:45:12 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:45:16.071297 | orchestrator | 2025-09-10 00:45:16 | INFO  | Task f396ab6b-5bd0-4f47-90cc-dd1ba7b6c088 is in state STARTED
2025-09-10 00:45:16.072426 | orchestrator | 2025-09-10 00:45:16 | INFO  | Task cb5f09db-aacf-442d-abca-5678f317285f is in state STARTED
2025-09-10 00:45:16.072461 | orchestrator | 2025-09-10 00:45:16 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED
2025-09-10 00:45:16.072474 | orchestrator | 2025-09-10 00:45:16 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED
2025-09-10 00:45:16.072485 | orchestrator | 2025-09-10 00:45:16 | INFO  | Task 5055cd45-586a-4b2a-bc27-cfae325935ec is in state STARTED
2025-09-10 00:45:16.072496 | orchestrator | 2025-09-10 00:45:16 | INFO  | Task 3f33e14e-ea88-4518-b5de-da7d227fdf66 is in state STARTED
2025-09-10 00:45:16.072507 | orchestrator | 2025-09-10 00:45:16 | INFO  | Task 030ca014-ff8b-4482-af90-85efb42b42ed is in state STARTED
2025-09-10 00:45:16.072518 | orchestrator | 2025-09-10 00:45:16 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:45:19.106206 | orchestrator | 2025-09-10 00:45:19 | INFO  | Task f396ab6b-5bd0-4f47-90cc-dd1ba7b6c088 is in state STARTED
2025-09-10 00:45:19.108874 | orchestrator | 2025-09-10 00:45:19 | INFO  | Task cb5f09db-aacf-442d-abca-5678f317285f is in state STARTED
2025-09-10 00:45:19.108957 | orchestrator | 2025-09-10 00:45:19 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED
2025-09-10 00:45:19.109735 | orchestrator | 2025-09-10 00:45:19 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED
2025-09-10 00:45:19.110827 | orchestrator | 2025-09-10 00:45:19 | INFO  | Task 5055cd45-586a-4b2a-bc27-cfae325935ec is in state STARTED
2025-09-10 00:45:19.112261 | orchestrator | 2025-09-10 00:45:19 | INFO  | Task 3f33e14e-ea88-4518-b5de-da7d227fdf66 is in state STARTED
2025-09-10 00:45:19.114276 | orchestrator | 2025-09-10 00:45:19 | INFO  | Task 030ca014-ff8b-4482-af90-85efb42b42ed is in state STARTED
2025-09-10 00:45:19.114325 | orchestrator | 2025-09-10 00:45:19 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:45:22.374247 | orchestrator | 2025-09-10 00:45:22 | INFO  | Task f396ab6b-5bd0-4f47-90cc-dd1ba7b6c088 is in state STARTED
2025-09-10 00:45:22.374319 | orchestrator | 2025-09-10 00:45:22 | INFO  | Task cb5f09db-aacf-442d-abca-5678f317285f is in state STARTED
2025-09-10 00:45:22.374333 | orchestrator | 2025-09-10 00:45:22 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED
2025-09-10 00:45:22.374339 | orchestrator | 2025-09-10 00:45:22 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED
2025-09-10 00:45:22.374344 | orchestrator | 2025-09-10 00:45:22 | INFO  | Task 5055cd45-586a-4b2a-bc27-cfae325935ec is in state STARTED
2025-09-10 00:45:22.374349 | orchestrator | 2025-09-10 00:45:22 | INFO  | Task 3f33e14e-ea88-4518-b5de-da7d227fdf66 is in state STARTED
2025-09-10 00:45:22.374355 | orchestrator | 2025-09-10 00:45:22 | INFO  | Task 030ca014-ff8b-4482-af90-85efb42b42ed is in state STARTED
2025-09-10 00:45:22.374360 | orchestrator | 2025-09-10 00:45:22 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:45:25.339836 | orchestrator | 2025-09-10 00:45:25 | INFO  | Task f396ab6b-5bd0-4f47-90cc-dd1ba7b6c088 is in state STARTED
2025-09-10 00:45:25.339937 | orchestrator | 2025-09-10 00:45:25 | INFO  | Task cb5f09db-aacf-442d-abca-5678f317285f is in state STARTED
2025-09-10 00:45:25.339951 | orchestrator | 2025-09-10 00:45:25 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED
2025-09-10 00:45:25.339964 | orchestrator | 2025-09-10 00:45:25 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED
2025-09-10 00:45:25.339975 | orchestrator | 2025-09-10 00:45:25 | INFO  | Task 5055cd45-586a-4b2a-bc27-cfae325935ec is in state STARTED
2025-09-10 00:45:25.340010 | orchestrator | 2025-09-10 00:45:25 | INFO  | Task 3f33e14e-ea88-4518-b5de-da7d227fdf66 is in state STARTED
2025-09-10 00:45:25.340022 | orchestrator | 2025-09-10 00:45:25 | INFO  | Task 030ca014-ff8b-4482-af90-85efb42b42ed is in state STARTED
2025-09-10 00:45:25.340033 | orchestrator | 2025-09-10 00:45:25 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:45:28.521019 | orchestrator | 2025-09-10 00:45:28 | INFO  | Task f396ab6b-5bd0-4f47-90cc-dd1ba7b6c088 is in state STARTED
2025-09-10 00:45:28.521147 | orchestrator | 2025-09-10 00:45:28 | INFO  | Task cb5f09db-aacf-442d-abca-5678f317285f is in state STARTED
2025-09-10 00:45:28.521166 | orchestrator | 2025-09-10 00:45:28 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED
2025-09-10 00:45:28.521179 | orchestrator | 2025-09-10 00:45:28 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED
2025-09-10 00:45:28.521190 | orchestrator | 2025-09-10 00:45:28 | INFO  | Task 5055cd45-586a-4b2a-bc27-cfae325935ec is in state STARTED
2025-09-10 00:45:28.521201 | orchestrator | 2025-09-10 00:45:28 | INFO  | Task 3f33e14e-ea88-4518-b5de-da7d227fdf66 is in state STARTED
2025-09-10 00:45:28.521212 | orchestrator | 2025-09-10 00:45:28 | INFO  | Task 030ca014-ff8b-4482-af90-85efb42b42ed is in state STARTED
2025-09-10 00:45:28.521223 | orchestrator | 2025-09-10 00:45:28 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:45:31.526059 | orchestrator | 2025-09-10 00:45:31 | INFO  | Task f396ab6b-5bd0-4f47-90cc-dd1ba7b6c088 is in state STARTED
2025-09-10 00:45:31.526164 | orchestrator | 2025-09-10 00:45:31 | INFO  | Task cb5f09db-aacf-442d-abca-5678f317285f is in state STARTED
2025-09-10 00:45:31.526179 | orchestrator | 2025-09-10 00:45:31 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED
2025-09-10 00:45:31.526503 | orchestrator | 2025-09-10 00:45:31 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED
2025-09-10 00:45:31.526516 | orchestrator | 2025-09-10 00:45:31 | INFO  | Task 5055cd45-586a-4b2a-bc27-cfae325935ec is in state STARTED
2025-09-10 00:45:31.526526 | orchestrator | 2025-09-10 00:45:31 | INFO  | Task 3f33e14e-ea88-4518-b5de-da7d227fdf66 is in state STARTED
2025-09-10 00:45:31.526536 | orchestrator | 2025-09-10 00:45:31 | INFO  | Task 030ca014-ff8b-4482-af90-85efb42b42ed is in state STARTED
2025-09-10 00:45:31.526549 | orchestrator | 2025-09-10 00:45:31 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:45:34.524684 | orchestrator | 2025-09-10 00:45:34 | INFO  | Task f396ab6b-5bd0-4f47-90cc-dd1ba7b6c088 is in state STARTED
2025-09-10 00:45:34.524789 | orchestrator | 2025-09-10 00:45:34 | INFO  | Task cb5f09db-aacf-442d-abca-5678f317285f is in state STARTED
2025-09-10 00:45:34.524805 | orchestrator | 2025-09-10 00:45:34 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED
2025-09-10 00:45:34.524817 | orchestrator | 2025-09-10 00:45:34 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED
2025-09-10 00:45:34.524828 | orchestrator | 2025-09-10 00:45:34 | INFO  | Task 5055cd45-586a-4b2a-bc27-cfae325935ec is in state STARTED
2025-09-10 00:45:34.526774 | orchestrator | 2025-09-10 00:45:34 | INFO  | Task 3f33e14e-ea88-4518-b5de-da7d227fdf66 is in state STARTED
2025-09-10 00:45:34.527866 | orchestrator | 2025-09-10 00:45:34 | INFO  | Task 030ca014-ff8b-4482-af90-85efb42b42ed is in state STARTED
2025-09-10 00:45:34.527889 | orchestrator | 2025-09-10 00:45:34 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:45:37.560327 | orchestrator | 2025-09-10 00:45:37 | INFO  | Task f396ab6b-5bd0-4f47-90cc-dd1ba7b6c088 is in state STARTED
2025-09-10 00:45:37.561688 | orchestrator | 2025-09-10 00:45:37 | INFO  | Task cb5f09db-aacf-442d-abca-5678f317285f is in state STARTED
2025-09-10 00:45:37.562323 | orchestrator | 2025-09-10 00:45:37 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED
2025-09-10 00:45:37.564368 | orchestrator | 2025-09-10 00:45:37 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED
2025-09-10 00:45:37.564980 | orchestrator | 2025-09-10 00:45:37 | INFO  | Task 5055cd45-586a-4b2a-bc27-cfae325935ec is in state STARTED
2025-09-10 00:45:37.565783 | orchestrator | 2025-09-10 00:45:37 | INFO  | Task 3f33e14e-ea88-4518-b5de-da7d227fdf66 is in state SUCCESS
2025-09-10 00:45:37.568048 | orchestrator | 2025-09-10 00:45:37 | INFO  | Task 030ca014-ff8b-4482-af90-85efb42b42ed is in state STARTED
2025-09-10 00:45:37.568142 | orchestrator | 2025-09-10 00:45:37 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:45:40.624480 | orchestrator | 2025-09-10 00:45:40 | INFO  | Task f396ab6b-5bd0-4f47-90cc-dd1ba7b6c088 is in state STARTED
2025-09-10 00:45:40.624637 | orchestrator | 2025-09-10 00:45:40 | INFO  | Task cb5f09db-aacf-442d-abca-5678f317285f is in state STARTED
2025-09-10 00:45:40.624655 | orchestrator | 2025-09-10 00:45:40 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED
2025-09-10 00:45:40.624667 | orchestrator | 2025-09-10 00:45:40 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED
2025-09-10 00:45:40.626163 | orchestrator | 2025-09-10 00:45:40 | INFO  | Task 5055cd45-586a-4b2a-bc27-cfae325935ec is in state STARTED
2025-09-10 00:45:40.628188 | orchestrator | 2025-09-10 00:45:40 | INFO  | Task 030ca014-ff8b-4482-af90-85efb42b42ed is in state STARTED
2025-09-10 00:45:40.628471 | orchestrator | 2025-09-10 00:45:40 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:45:43.672156 | orchestrator | 2025-09-10 00:45:43 | INFO  | Task f396ab6b-5bd0-4f47-90cc-dd1ba7b6c088 is in state STARTED
2025-09-10 00:45:43.674765 | orchestrator | 2025-09-10 00:45:43 | INFO  | Task cb5f09db-aacf-442d-abca-5678f317285f is in state STARTED
2025-09-10 00:45:43.675191 | orchestrator | 2025-09-10 00:45:43 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED
2025-09-10 00:45:43.678149 | orchestrator | 2025-09-10 00:45:43 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED
2025-09-10 00:45:43.678838 | orchestrator | 2025-09-10 00:45:43 | INFO  | Task 5055cd45-586a-4b2a-bc27-cfae325935ec is in state STARTED
2025-09-10 00:45:43.679420 | orchestrator | 2025-09-10 00:45:43 | INFO  | Task 030ca014-ff8b-4482-af90-85efb42b42ed is in state SUCCESS
2025-09-10 00:45:43.679445 | orchestrator | 2025-09-10 00:45:43 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:45:46.732272 | orchestrator | 2025-09-10 00:45:46 | INFO  | Task f396ab6b-5bd0-4f47-90cc-dd1ba7b6c088 is in state STARTED
2025-09-10 00:45:46.733303 | orchestrator | 2025-09-10 00:45:46 | INFO  | Task cb5f09db-aacf-442d-abca-5678f317285f is in state STARTED
2025-09-10 00:45:46.733469 | orchestrator | 2025-09-10 00:45:46 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED
2025-09-10 00:45:46.733503 | orchestrator | 2025-09-10 00:45:46 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED
2025-09-10 00:45:46.734269 | orchestrator | 2025-09-10 00:45:46 | INFO  | Task 5055cd45-586a-4b2a-bc27-cfae325935ec is in state STARTED
2025-09-10 00:45:46.734315 | orchestrator | 2025-09-10 00:45:46 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:45:49.807369 | orchestrator | 2025-09-10 00:45:49 | INFO  | Task f396ab6b-5bd0-4f47-90cc-dd1ba7b6c088 is in state STARTED
2025-09-10 00:45:49.808241 | orchestrator | 2025-09-10 00:45:49 | INFO  | Task cb5f09db-aacf-442d-abca-5678f317285f is in state STARTED
2025-09-10 00:45:49.812765 | orchestrator | 2025-09-10 00:45:49 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED
2025-09-10 00:45:49.815878 | orchestrator | 2025-09-10 00:45:49 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED
2025-09-10 00:45:49.817167 | orchestrator | 2025-09-10 00:45:49 | INFO  | Task 5055cd45-586a-4b2a-bc27-cfae325935ec is in state STARTED
2025-09-10 00:45:49.817190 | orchestrator | 2025-09-10 00:45:49 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:45:52.857093 | orchestrator | 2025-09-10 00:45:52 | INFO  | Task f396ab6b-5bd0-4f47-90cc-dd1ba7b6c088 is in state STARTED
2025-09-10 00:45:52.863759 | orchestrator | 2025-09-10 00:45:52 | INFO  | Task cb5f09db-aacf-442d-abca-5678f317285f is in state STARTED
2025-09-10 00:45:52.871091 | orchestrator | 2025-09-10 00:45:52 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED
2025-09-10 00:45:52.879127 | orchestrator | 2025-09-10 00:45:52 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED
2025-09-10 00:45:52.887991 | orchestrator | 2025-09-10 00:45:52 | INFO  | Task 5055cd45-586a-4b2a-bc27-cfae325935ec is in state STARTED
2025-09-10 00:45:52.888022 | orchestrator | 2025-09-10 00:45:52 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:45:55.932460 | orchestrator | 2025-09-10 00:45:55 | INFO  | Task f396ab6b-5bd0-4f47-90cc-dd1ba7b6c088 is in state STARTED
2025-09-10 00:45:55.932894 | orchestrator | 2025-09-10 00:45:55 | INFO  | Task cb5f09db-aacf-442d-abca-5678f317285f is in state STARTED
2025-09-10 00:45:55.933586 | orchestrator | 2025-09-10 00:45:55 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED
2025-09-10 00:45:55.934226 | orchestrator | 2025-09-10 00:45:55 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED
2025-09-10 00:45:55.935196 | orchestrator | 2025-09-10 00:45:55 | INFO  | Task 5055cd45-586a-4b2a-bc27-cfae325935ec is in state STARTED
2025-09-10 00:45:55.935218 | orchestrator | 2025-09-10 00:45:55 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:45:59.005309 | orchestrator | 2025-09-10 00:45:59 | INFO  | Task f396ab6b-5bd0-4f47-90cc-dd1ba7b6c088 is in state STARTED
2025-09-10 00:45:59.010708 | orchestrator | 2025-09-10 00:45:59 | INFO  | Task cb5f09db-aacf-442d-abca-5678f317285f is in state STARTED
2025-09-10 00:45:59.015742 | orchestrator | 2025-09-10 00:45:59 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED
2025-09-10 00:45:59.026529 | orchestrator | 2025-09-10 00:45:59 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED
2025-09-10 00:45:59.038714 | orchestrator | 2025-09-10 00:45:59 | INFO  | Task 5055cd45-586a-4b2a-bc27-cfae325935ec is in state STARTED
2025-09-10 00:45:59.038765 | orchestrator | 2025-09-10 00:45:59 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:46:02.083829 | orchestrator | 2025-09-10 00:46:02 | INFO  | Task f396ab6b-5bd0-4f47-90cc-dd1ba7b6c088 is in state STARTED
2025-09-10 00:46:02.083936 | orchestrator | 2025-09-10 00:46:02 | INFO  | Task cb5f09db-aacf-442d-abca-5678f317285f is in state STARTED
2025-09-10 00:46:02.085657 | orchestrator | 2025-09-10 00:46:02 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED
2025-09-10 00:46:02.087923 | orchestrator | 2025-09-10 00:46:02 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED
2025-09-10 00:46:02.088795 | orchestrator | 2025-09-10 00:46:02 | INFO  | Task 5055cd45-586a-4b2a-bc27-cfae325935ec is in state STARTED
2025-09-10 00:46:02.088849 | orchestrator | 2025-09-10 00:46:02 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:46:05.156980 | orchestrator |
2025-09-10 00:46:05.158063 | orchestrator |
2025-09-10 00:46:05.158141 | orchestrator | PLAY [Apply role homer] ********************************************************
2025-09-10 00:46:05.158156 | orchestrator |
2025-09-10 00:46:05.158168 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2025-09-10 00:46:05.158179 | orchestrator | Wednesday 10 September 2025 00:44:53 +0000 (0:00:00.686) 0:00:00.686 ***
2025-09-10 00:46:05.158189 | orchestrator | ok: [testbed-manager] => {
2025-09-10 00:46:05.158201 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2025-09-10 00:46:05.158213 | orchestrator | }
2025-09-10 00:46:05.158223 | orchestrator |
2025-09-10 00:46:05.158232 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2025-09-10 00:46:05.158242 | orchestrator | Wednesday 10 September 2025 00:44:53 +0000 (0:00:00.585) 0:00:01.271 ***
2025-09-10 00:46:05.158252 | orchestrator | ok: [testbed-manager]
2025-09-10 00:46:05.158262 | orchestrator |
2025-09-10 00:46:05.158271 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2025-09-10 00:46:05.158281 | orchestrator | Wednesday 10 September 2025 00:44:55 +0000 (0:00:01.838) 0:00:03.110 ***
2025-09-10 00:46:05.158291 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2025-09-10 00:46:05.158301 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2025-09-10 00:46:05.158311 | orchestrator |
2025-09-10 00:46:05.158320 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2025-09-10 00:46:05.158330 | orchestrator | Wednesday 10 September 2025 00:44:56 +0000 (0:00:00.979) 0:00:04.089 ***
2025-09-10 00:46:05.158339 | orchestrator | changed: [testbed-manager]
2025-09-10 00:46:05.158349 | orchestrator |
2025-09-10 00:46:05.158358 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2025-09-10 00:46:05.158368 | orchestrator | Wednesday 10 September 2025 00:44:59 +0000 (0:00:02.769) 0:00:06.859 ***
2025-09-10 00:46:05.158377 | orchestrator | changed: [testbed-manager]
2025-09-10 00:46:05.158387 | orchestrator |
2025-09-10 00:46:05.158397 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2025-09-10 00:46:05.158406 | orchestrator | Wednesday 10 September 2025 00:45:01 +0000 (0:00:02.374) 0:00:09.233 ***
2025-09-10 00:46:05.158416 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2025-09-10 00:46:05.158425 | orchestrator | ok: [testbed-manager]
2025-09-10 00:46:05.158435 | orchestrator |
2025-09-10 00:46:05.158445 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2025-09-10 00:46:05.158454 | orchestrator | Wednesday 10 September 2025 00:45:31 +0000 (0:00:29.636) 0:00:38.869 ***
2025-09-10 00:46:05.158464 | orchestrator | changed: [testbed-manager]
2025-09-10 00:46:05.158473 | orchestrator |
2025-09-10 00:46:05.158483 | orchestrator | PLAY RECAP *********************************************************************
2025-09-10 00:46:05.158493 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-10 00:46:05.158504 | orchestrator |
2025-09-10 00:46:05.158514 | orchestrator |
2025-09-10 00:46:05.158541 | orchestrator | TASKS RECAP ********************************************************************
2025-09-10 00:46:05.158583 | orchestrator | Wednesday 10 September 2025 00:45:34 +0000 (0:00:03.070) 0:00:41.940 ***
2025-09-10 00:46:05.158593 | orchestrator | ===============================================================================
2025-09-10 00:46:05.158603 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 29.64s
2025-09-10 00:46:05.158613 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 3.07s
2025-09-10 00:46:05.158622 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.77s
2025-09-10 00:46:05.158632 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.37s
2025-09-10 00:46:05.158667 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.84s
2025-09-10 00:46:05.158677 | orchestrator | osism.services.homer : Create required directories ---------------------- 0.98s
2025-09-10 00:46:05.158686 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.59s
2025-09-10 00:46:05.158696 | orchestrator |
2025-09-10 00:46:05.158705 | orchestrator |
2025-09-10 00:46:05.158715 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2025-09-10 00:46:05.158724 | orchestrator |
2025-09-10 00:46:05.158734 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2025-09-10 00:46:05.158743 | orchestrator | Wednesday 10 September 2025 00:44:54 +0000 (0:00:00.820) 0:00:00.820 ***
2025-09-10 00:46:05.158753 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2025-09-10 00:46:05.158764 | orchestrator |
2025-09-10 00:46:05.158774 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2025-09-10 00:46:05.158783 | orchestrator | Wednesday 10 September 2025 00:44:55 +0000 (0:00:00.903) 0:00:01.723 ***
2025-09-10 00:46:05.158792 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2025-09-10 00:46:05.158802 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2025-09-10 00:46:05.158812 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2025-09-10 00:46:05.158821 | orchestrator |
2025-09-10 00:46:05.158831 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2025-09-10 00:46:05.158840 | orchestrator | Wednesday 10 September 2025 00:44:56 +0000 (0:00:01.335) 0:00:03.059 ***
2025-09-10 00:46:05.158850 | orchestrator | changed: [testbed-manager]
2025-09-10 00:46:05.158859 | orchestrator |
2025-09-10 00:46:05.158869 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2025-09-10 00:46:05.158879 | orchestrator | Wednesday 10 September 2025 00:44:58 +0000 (0:00:01.451) 0:00:04.510 ***
2025-09-10 00:46:05.158912 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2025-09-10 00:46:05.158923 | orchestrator | ok: [testbed-manager]
2025-09-10 00:46:05.158932 | orchestrator |
2025-09-10 00:46:05.158942 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2025-09-10 00:46:05.158952 | orchestrator | Wednesday 10 September 2025 00:45:33 +0000 (0:00:35.165) 0:00:39.675 ***
2025-09-10 00:46:05.158961 | orchestrator | changed: [testbed-manager]
2025-09-10 00:46:05.158971 | orchestrator |
2025-09-10 00:46:05.158980 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2025-09-10 00:46:05.158990 | orchestrator | Wednesday 10 September 2025 00:45:35 +0000 (0:00:01.907) 0:00:41.583 ***
2025-09-10 00:46:05.158999 | orchestrator | ok: [testbed-manager]
2025-09-10 00:46:05.159009 | orchestrator |
2025-09-10 00:46:05.159019 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2025-09-10 00:46:05.159029 | orchestrator | Wednesday 10 September 2025 00:45:36 +0000 (0:00:00.964) 0:00:42.547 ***
2025-09-10 00:46:05.159038 | orchestrator | changed: [testbed-manager]
2025-09-10 00:46:05.159048 | orchestrator |
2025-09-10 00:46:05.159057 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2025-09-10 00:46:05.159067 | orchestrator | Wednesday 10 September 2025 00:45:38 +0000 (0:00:02.715) 0:00:45.262 ***
2025-09-10 00:46:05.159077 | orchestrator | changed: [testbed-manager]
2025-09-10 00:46:05.159086 | orchestrator |
2025-09-10 00:46:05.159096 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2025-09-10 00:46:05.159158 | orchestrator | Wednesday 10 September 2025 00:45:40 +0000 (0:00:01.333) 0:00:46.596 ***
2025-09-10 00:46:05.159168 | orchestrator | changed: [testbed-manager]
2025-09-10 00:46:05.159178 | orchestrator |
2025-09-10 00:46:05.159187 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2025-09-10 00:46:05.159197 | orchestrator | Wednesday 10 September 2025 00:45:41 +0000 (0:00:00.937) 0:00:47.533 ***
2025-09-10 00:46:05.159215 | orchestrator | ok: [testbed-manager]
2025-09-10 00:46:05.159224 | orchestrator |
2025-09-10 00:46:05.159234 | orchestrator | PLAY RECAP *********************************************************************
2025-09-10 00:46:05.159244 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-10 00:46:05.159253 | orchestrator |
2025-09-10 00:46:05.159263 | orchestrator |
2025-09-10 00:46:05.159306 | orchestrator | TASKS RECAP ********************************************************************
2025-09-10 00:46:05.159317 | orchestrator | Wednesday 10 September 2025 00:45:41 +0000 (0:00:00.442) 0:00:47.975 ***
2025-09-10 00:46:05.159327 | orchestrator | ===============================================================================
2025-09-10 00:46:05.159336 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 35.17s
2025-09-10 00:46:05.159346 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.72s
2025-09-10 00:46:05.159355 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.91s
2025-09-10 00:46:05.159371 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.45s
2025-09-10 00:46:05.159381 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.34s
2025-09-10 00:46:05.159391 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.33s
2025-09-10 00:46:05.159400 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.96s
2025-09-10 00:46:05.159410 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.94s
2025-09-10 00:46:05.159419 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.90s
2025-09-10 00:46:05.159429 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.44s
2025-09-10 00:46:05.159438 | orchestrator |
2025-09-10 00:46:05.159447 | orchestrator |
2025-09-10 00:46:05.159457 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2025-09-10 00:46:05.159466 | orchestrator |
2025-09-10 00:46:05.159476 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2025-09-10 00:46:05.159485 | orchestrator | Wednesday 10 September 2025 00:45:09 +0000 (0:00:00.188) 0:00:00.188 ***
2025-09-10 00:46:05.159495 | orchestrator | ok: [testbed-manager]
2025-09-10 00:46:05.159505 | orchestrator |
2025-09-10 00:46:05.159514 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2025-09-10 00:46:05.159524 | orchestrator | Wednesday 10 September 2025 00:45:11 +0000 (0:00:01.889) 0:00:02.078 ***
2025-09-10 00:46:05.159533 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2025-09-10 00:46:05.159543 | orchestrator |
2025-09-10 00:46:05.159570 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2025-09-10 00:46:05.159580 | orchestrator | Wednesday 10 September 2025 00:45:11 +0000 (0:00:00.723) 0:00:02.802 ***
2025-09-10 00:46:05.159589 | orchestrator | changed: [testbed-manager]
2025-09-10 00:46:05.159599 | orchestrator |
2025-09-10 00:46:05.159608 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2025-09-10 00:46:05.159618 | orchestrator | Wednesday 10 September 2025 00:45:13 +0000 (0:00:01.846) 0:00:04.648 ***
2025-09-10 00:46:05.159627 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2025-09-10 00:46:05.159637 | orchestrator | ok: [testbed-manager]
2025-09-10 00:46:05.159646 | orchestrator |
2025-09-10 00:46:05.159656 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-09-10 00:46:05.159665 | orchestrator | Wednesday 10 September 2025 00:45:59 +0000 (0:00:45.568) 0:00:50.217 ***
2025-09-10 00:46:05.159675 | orchestrator | changed: [testbed-manager]
2025-09-10 00:46:05.159684 | orchestrator |
2025-09-10 00:46:05.159694 | orchestrator | PLAY RECAP *********************************************************************
2025-09-10 00:46:05.159704 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-10 00:46:05.159720 | orchestrator |
2025-09-10 00:46:05.159730 | orchestrator |
2025-09-10 00:46:05.159739 | orchestrator | TASKS RECAP ********************************************************************
2025-09-10 00:46:05.159757 | orchestrator | Wednesday 10 September 2025 00:46:03 +0000 (0:00:04.408) 0:00:54.626 ***
2025-09-10 00:46:05.159767 | orchestrator | ===============================================================================
2025-09-10 00:46:05.159777 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 45.57s
2025-09-10 00:46:05.159786 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 4.41s
2025-09-10 00:46:05.159796 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.89s
2025-09-10 00:46:05.159805 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.85s
2025-09-10 00:46:05.159815 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.72s
2025-09-10 00:46:05.159824 | orchestrator | 2025-09-10 00:46:05 | INFO  | Task f396ab6b-5bd0-4f47-90cc-dd1ba7b6c088 is in state SUCCESS
2025-09-10 00:46:05.160211 | orchestrator | 2025-09-10 00:46:05 | INFO  | Task cb5f09db-aacf-442d-abca-5678f317285f is in state STARTED
2025-09-10 00:46:05.162071 | orchestrator | 2025-09-10 00:46:05 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED
2025-09-10 00:46:05.163448 | orchestrator | 2025-09-10 00:46:05 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED
2025-09-10 00:46:05.166045 | orchestrator | 2025-09-10 00:46:05 | INFO  | Task 5055cd45-586a-4b2a-bc27-cfae325935ec is in state STARTED
2025-09-10 00:46:05.166070 | orchestrator | 2025-09-10 00:46:05 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:46:08.235161 | orchestrator | 2025-09-10 00:46:08 | INFO  | Task cb5f09db-aacf-442d-abca-5678f317285f is in state STARTED
2025-09-10 00:46:08.235249 | orchestrator | 2025-09-10 00:46:08 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED
2025-09-10 00:46:08.235263 | orchestrator | 2025-09-10 00:46:08 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED
2025-09-10 00:46:08.235275 | orchestrator | 2025-09-10 00:46:08 | INFO  | Task 5055cd45-586a-4b2a-bc27-cfae325935ec is in state SUCCESS
2025-09-10 00:46:08.235286 | orchestrator |
2025-09-10 00:46:08.235298 | orchestrator |
2025-09-10 00:46:08.235309 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-10 00:46:08.235320 | orchestrator |
2025-09-10 00:46:08.235331 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-10 00:46:08.235342 | orchestrator | Wednesday 10 September 2025 00:44:52 +0000 (0:00:00.622) 0:00:00.622 ***
2025-09-10 00:46:08.235354 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2025-09-10 00:46:08.235372 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2025-09-10 00:46:08.235383 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2025-09-10 00:46:08.235394 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2025-09-10 00:46:08.235405 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2025-09-10 00:46:08.235415 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2025-09-10 00:46:08.235426 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2025-09-10 00:46:08.235437 | orchestrator |
2025-09-10 00:46:08.235448 | orchestrator | PLAY [Apply role netdata] ******************************************************
2025-09-10 00:46:08.235458 | orchestrator |
2025-09-10 00:46:08.235469 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2025-09-10 00:46:08.235480 | orchestrator | Wednesday 10 September 2025 00:44:55 +0000 (0:00:02.890) 0:00:03.512 ***
2025-09-10 00:46:08.235503 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-10 00:46:08.235538 | orchestrator |
2025-09-10 00:46:08.235581 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2025-09-10 00:46:08.235593 | orchestrator | Wednesday 10 September 2025 00:44:56 +0000 (0:00:01.597) 0:00:05.110 ***
2025-09-10 00:46:08.235603 | orchestrator | ok: [testbed-manager]
2025-09-10 00:46:08.235615 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:46:08.235626 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:46:08.235637 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:46:08.235648 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:46:08.235665 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:46:08.235683 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:46:08.235699 | orchestrator |
2025-09-10 00:46:08.235718 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2025-09-10 00:46:08.235735 | orchestrator | Wednesday 10 September 2025 00:44:58 +0000 (0:00:01.673) 0:00:06.783 ***
2025-09-10 00:46:08.235752 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:46:08.235770 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:46:08.235787 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:46:08.235802 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:46:08.235819 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:46:08.235837 | orchestrator | ok: [testbed-manager]
2025-09-10 00:46:08.235854 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:46:08.235871 | orchestrator |
2025-09-10 00:46:08.235889 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2025-09-10 00:46:08.235907 | orchestrator | Wednesday 10 September 2025 00:45:02 +0000 (0:00:03.599) 0:00:10.383 ***
2025-09-10 00:46:08.235926 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:46:08.235947 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:46:08.235966 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:46:08.235983 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:46:08.235997 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:46:08.236010 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:46:08.236022 | orchestrator | changed: [testbed-manager]
2025-09-10 00:46:08.236035 | orchestrator |
2025-09-10 00:46:08.236047 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2025-09-10 00:46:08.236060 | orchestrator | Wednesday 10 September 2025 00:45:04 +0000 (0:00:02.505) 0:00:12.888 ***
2025-09-10 00:46:08.236073 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:46:08.236085 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:46:08.236097 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:46:08.236107 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:46:08.236117 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:46:08.236128 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:46:08.236139 | orchestrator | changed: [testbed-manager]
2025-09-10 00:46:08.236149 | orchestrator |
2025-09-10 00:46:08.236160 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2025-09-10 00:46:08.236170 | orchestrator | Wednesday 10 September 2025 00:45:16 +0000 (0:00:12.249) 0:00:25.138 ***
2025-09-10 00:46:08.236181 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:46:08.236192 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:46:08.236202 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:46:08.236213 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:46:08.236223 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:46:08.236234 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:46:08.236244 | orchestrator | changed: [testbed-manager]
2025-09-10 00:46:08.236255 | orchestrator |
2025-09-10 00:46:08.236266 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2025-09-10 00:46:08.236276 | orchestrator | Wednesday 10 September 2025 00:45:42 +0000 (0:00:25.947) 0:00:51.085 ***
2025-09-10 00:46:08.236306 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-10 00:46:08.236319 | orchestrator |
2025-09-10 00:46:08.236341 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2025-09-10 00:46:08.236352 | orchestrator | Wednesday 10 September 2025 00:45:44 +0000 (0:00:01.612) 0:00:52.698 ***
2025-09-10 00:46:08.236363 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2025-09-10 00:46:08.236374 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2025-09-10 00:46:08.236385 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2025-09-10 00:46:08.236395 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2025-09-10 00:46:08.236406 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2025-09-10 00:46:08.236416 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2025-09-10 00:46:08.236427 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2025-09-10 00:46:08.236437 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2025-09-10 00:46:08.236448 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2025-09-10 00:46:08.236459 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2025-09-10 00:46:08.236469 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2025-09-10 00:46:08.236480 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2025-09-10 00:46:08.236490 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2025-09-10 00:46:08.236501 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2025-09-10 00:46:08.236511 | orchestrator |
2025-09-10 00:46:08.236522 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2025-09-10 00:46:08.236533 | orchestrator | Wednesday 10 September 2025 00:45:51 +0000 (0:00:06.513) 0:00:59.212 ***
2025-09-10 00:46:08.236562 | orchestrator | ok: [testbed-manager]
2025-09-10 00:46:08.236573 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:46:08.236584 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:46:08.236595 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:46:08.236605 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:46:08.236616 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:46:08.236626 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:46:08.236637 | orchestrator |
2025-09-10 00:46:08.236648 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2025-09-10 00:46:08.236658 | orchestrator | Wednesday 10 September 2025 00:45:52 +0000 (0:00:01.301) 0:01:00.513 ***
2025-09-10 00:46:08.236669 | orchestrator | changed: [testbed-manager]
2025-09-10 00:46:08.236680 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:46:08.236690 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:46:08.236701 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:46:08.236712 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:46:08.236722 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:46:08.236733 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:46:08.236743 | orchestrator |
2025-09-10 00:46:08.236754 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2025-09-10 00:46:08.236765 | orchestrator | Wednesday 10 September 2025 00:45:54 +0000 (0:00:02.050) 0:01:02.563 ***
2025-09-10 00:46:08.236775 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:46:08.236786 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:46:08.236797 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:46:08.236807 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:46:08.236818 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:46:08.236828 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:46:08.236839 | orchestrator | ok: [testbed-manager]
2025-09-10 00:46:08.236849 | orchestrator |
2025-09-10 00:46:08.236860 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2025-09-10 00:46:08.236871 | orchestrator | Wednesday 10 September 2025 00:45:55 +0000
(0:00:01.563) 0:01:04.127 *** 2025-09-10 00:46:08.237048 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:46:08.237070 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:46:08.237081 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:46:08.237092 | orchestrator | ok: [testbed-manager] 2025-09-10 00:46:08.237102 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:46:08.237122 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:46:08.237133 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:46:08.237143 | orchestrator | 2025-09-10 00:46:08.237154 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-09-10 00:46:08.237165 | orchestrator | Wednesday 10 September 2025 00:45:58 +0000 (0:00:02.403) 0:01:06.531 *** 2025-09-10 00:46:08.237176 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-09-10 00:46:08.237188 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-10 00:46:08.237199 | orchestrator | 2025-09-10 00:46:08.237210 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-09-10 00:46:08.237221 | orchestrator | Wednesday 10 September 2025 00:46:00 +0000 (0:00:01.983) 0:01:08.514 *** 2025-09-10 00:46:08.237231 | orchestrator | changed: [testbed-manager] 2025-09-10 00:46:08.237242 | orchestrator | 2025-09-10 00:46:08.237252 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-09-10 00:46:08.237263 | orchestrator | Wednesday 10 September 2025 00:46:02 +0000 (0:00:02.469) 0:01:10.984 *** 2025-09-10 00:46:08.237273 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:46:08.237284 | orchestrator | changed: [testbed-node-0] 2025-09-10 
00:46:08.237294 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:46:08.237305 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:46:08.237315 | orchestrator | changed: [testbed-manager] 2025-09-10 00:46:08.237326 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:46:08.237336 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:46:08.237346 | orchestrator | 2025-09-10 00:46:08.237357 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-10 00:46:08.237368 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-10 00:46:08.237390 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-10 00:46:08.237402 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-10 00:46:08.237413 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-10 00:46:08.237424 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-10 00:46:08.237434 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-10 00:46:08.237445 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-10 00:46:08.237455 | orchestrator | 2025-09-10 00:46:08.237466 | orchestrator | 2025-09-10 00:46:08.237481 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-10 00:46:08.237492 | orchestrator | Wednesday 10 September 2025 00:46:06 +0000 (0:00:04.041) 0:01:15.025 *** 2025-09-10 00:46:08.237503 | orchestrator | =============================================================================== 2025-09-10 00:46:08.237514 | orchestrator | osism.services.netdata : Install package netdata 
----------------------- 25.95s 2025-09-10 00:46:08.237524 | orchestrator | osism.services.netdata : Add repository -------------------------------- 12.25s 2025-09-10 00:46:08.237535 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 6.51s 2025-09-10 00:46:08.237582 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 4.04s 2025-09-10 00:46:08.237593 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.60s 2025-09-10 00:46:08.237612 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.89s 2025-09-10 00:46:08.237622 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.51s 2025-09-10 00:46:08.237633 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.47s 2025-09-10 00:46:08.237644 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.40s 2025-09-10 00:46:08.237657 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 2.05s 2025-09-10 00:46:08.237669 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.98s 2025-09-10 00:46:08.237682 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.67s 2025-09-10 00:46:08.237694 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.61s 2025-09-10 00:46:08.237706 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.60s 2025-09-10 00:46:08.237718 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.56s 2025-09-10 00:46:08.237731 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.30s 2025-09-10 00:46:08.237743 | orchestrator | 2025-09-10 00:46:08 | INFO  | Wait 1 
second(s) until the next check 2025-09-10 00:46:11.279400 | orchestrator | 2025-09-10 00:46:11 | INFO  | Task cb5f09db-aacf-442d-abca-5678f317285f is in state STARTED 2025-09-10 00:46:11.279930 | orchestrator | 2025-09-10 00:46:11 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED 2025-09-10 00:46:11.280692 | orchestrator | 2025-09-10 00:46:11 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED 2025-09-10 00:46:11.280716 | orchestrator | 2025-09-10 00:46:11 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:47:09.296060 | orchestrator |
2025-09-10 00:47:09 | INFO  | Task cb5f09db-aacf-442d-abca-5678f317285f is in state STARTED 2025-09-10 00:47:09.296590 | orchestrator | 2025-09-10 00:47:09 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED 2025-09-10 00:47:09.299231 | orchestrator | 2025-09-10 00:47:09 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED 2025-09-10 00:47:09.299263 | orchestrator | 2025-09-10 00:47:09 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:47:12.349266 | orchestrator | 2025-09-10 00:47:12 | INFO  | Task cb5f09db-aacf-442d-abca-5678f317285f is in state STARTED 2025-09-10 00:47:12.350613 | orchestrator | 2025-09-10 00:47:12 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED 2025-09-10 00:47:12.354620 | orchestrator | 2025-09-10 00:47:12 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED 2025-09-10 00:47:12.354659 | orchestrator | 2025-09-10 00:47:12 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:47:15.393749 | orchestrator | 2025-09-10 00:47:15 | INFO  | Task d678e451-bfb8-4a54-a5d6-5bebe65b2178 is in state STARTED 2025-09-10 00:47:15.396814 | orchestrator | 2025-09-10 00:47:15 | INFO  | Task cb5f09db-aacf-442d-abca-5678f317285f is in state SUCCESS 2025-09-10 00:47:15.400054 | orchestrator | 2025-09-10 00:47:15.400101 | orchestrator | 2025-09-10 00:47:15.400114 | orchestrator | PLAY [Apply role common] ******************************************************* 2025-09-10 00:47:15.400126 | orchestrator | 2025-09-10 00:47:15.400137 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-09-10 00:47:15.400149 | orchestrator | Wednesday 10 September 2025 00:44:45 +0000 (0:00:00.265) 0:00:00.265 *** 2025-09-10 00:47:15.400160 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-10 
00:47:15.400172 | orchestrator | 2025-09-10 00:47:15.400183 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-09-10 00:47:15.400194 | orchestrator | Wednesday 10 September 2025 00:44:46 +0000 (0:00:01.340) 0:00:01.605 *** 2025-09-10 00:47:15.400204 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-10 00:47:15.400215 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-10 00:47:15.400226 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-10 00:47:15.400243 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-10 00:47:15.400254 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-10 00:47:15.400265 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-10 00:47:15.400276 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-10 00:47:15.400287 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-10 00:47:15.400298 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-10 00:47:15.400308 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-10 00:47:15.400319 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-10 00:47:15.400330 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-10 00:47:15.400340 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-10 00:47:15.400351 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-10 00:47:15.400362 | orchestrator | 
changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-10 00:47:15.400372 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-10 00:47:15.400423 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-10 00:47:15.400436 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-10 00:47:15.400447 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-10 00:47:15.400458 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-10 00:47:15.400469 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-10 00:47:15.400481 | orchestrator | 2025-09-10 00:47:15.400492 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-09-10 00:47:15.400546 | orchestrator | Wednesday 10 September 2025 00:44:50 +0000 (0:00:03.998) 0:00:05.604 *** 2025-09-10 00:47:15.400566 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-10 00:47:15.400585 | orchestrator | 2025-09-10 00:47:15.400602 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-09-10 00:47:15.400631 | orchestrator | Wednesday 10 September 2025 00:44:51 +0000 (0:00:01.491) 0:00:07.095 *** 2025-09-10 00:47:15.400659 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-10 00:47:15.400686 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-10 00:47:15.400732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-10 00:47:15.400753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-10 00:47:15.400766 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-10 00:47:15.400790 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:47:15.400803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:47:15.400817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.400830 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-10 00:47:15.400859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.400877 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-10 00:47:15.400891 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.400928 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.400957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.400980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.400998 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.401016 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.401050 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.401068 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.401096 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.401126 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.401147 | orchestrator |
2025-09-10 00:47:15.401166 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2025-09-10 00:47:15.401186 | orchestrator | Wednesday 10 September 2025 00:44:57 +0000 (0:00:05.146) 0:00:12.242 ***
2025-09-10 00:47:15.401202 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-10 00:47:15.401218 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.401230 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.401241 | orchestrator | skipping: [testbed-manager]
2025-09-10 00:47:15.401253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-10 00:47:15.401278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.401290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.401308 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:47:15.401323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-10 00:47:15.401335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.401347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.401357 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:47:15.401368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-10 00:47:15.401380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.401391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.401402 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:47:15.401418 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-10 00:47:15.401464 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.401476 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.401487 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:47:15.401543 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-10 00:47:15.401556 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.401567 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.401578 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:47:15.401589 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-10 00:47:15.401607 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.401626 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.401637 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:47:15.401648 | orchestrator |
2025-09-10 00:47:15.401658 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2025-09-10 00:47:15.401669 | orchestrator | Wednesday 10 September 2025 00:44:58 +0000 (0:00:01.322) 0:00:13.564 ***
2025-09-10 00:47:15.401685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-10 00:47:15.401697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.401708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.401719 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-10 00:47:15.401731 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.401742 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.401753 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:47:15.401768 | orchestrator | skipping: [testbed-manager]
2025-09-10 00:47:15.401792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-10 00:47:15.401807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.401818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.401830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-10 00:47:15.401841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.401852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.401863 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:47:15.401874 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-10 00:47:15.401890 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.401907 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.401918 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:47:15.401928 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:47:15.401943 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-10 00:47:15.401955 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.401966 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.401977 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:47:15.401988 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-10 00:47:15.402000 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.402108 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.402132 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:47:15.402143 | orchestrator |
2025-09-10 00:47:15.402154 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2025-09-10 00:47:15.402165 | orchestrator | Wednesday 10 September 2025 00:45:01 +0000 (0:00:03.023) 0:00:16.587 ***
2025-09-10 00:47:15.402176 | orchestrator | skipping: [testbed-manager]
2025-09-10 00:47:15.402186 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:47:15.402197 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:47:15.402208 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:47:15.402218 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:47:15.402235 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:47:15.402246 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:47:15.402257 | orchestrator |
2025-09-10 00:47:15.402267 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2025-09-10 00:47:15.402278 | orchestrator | Wednesday 10 September 2025 00:45:02 +0000 (0:00:00.803) 0:00:17.390 ***
2025-09-10 00:47:15.402289 | orchestrator | skipping: [testbed-manager]
2025-09-10 00:47:15.402299 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:47:15.402309 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:47:15.402320 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:47:15.402330 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:47:15.402340 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:47:15.402351 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:47:15.402361 | orchestrator |
2025-09-10 00:47:15.402371 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2025-09-10 00:47:15.402382 | orchestrator | Wednesday 10 September 2025 00:45:03 +0000 (0:00:00.866) 0:00:18.257 ***
2025-09-10 00:47:15.402398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-10 00:47:15.402410 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-10 00:47:15.402421 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-10 00:47:15.402433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-10 00:47:15.402450 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-10 00:47:15.402461 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-10 00:47:15.402478 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.402490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.402518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.402530 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.402541 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-10 00:47:15.402564 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.402575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.402593 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE':
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:47:15.402605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:47:15.402621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:47:15.402632 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:47:15.402643 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:47:15.402660 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:47:15.402671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:47:15.402682 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:47:15.402693 | orchestrator | 2025-09-10 00:47:15.402704 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-09-10 00:47:15.402715 | orchestrator | Wednesday 10 September 2025 00:45:11 +0000 (0:00:08.596) 0:00:26.853 *** 2025-09-10 00:47:15.402726 | orchestrator | [WARNING]: Skipped 2025-09-10 00:47:15.402737 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-09-10 00:47:15.402748 | orchestrator | to this access issue: 2025-09-10 00:47:15.402759 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-09-10 00:47:15.402769 | orchestrator | directory 2025-09-10 00:47:15.402780 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-10 00:47:15.402791 | orchestrator | 2025-09-10 00:47:15.402802 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-09-10 00:47:15.402812 | orchestrator | Wednesday 10 September 2025 00:45:12 +0000 (0:00:01.270) 0:00:28.124 *** 2025-09-10 00:47:15.402823 | orchestrator | [WARNING]: Skipped 2025-09-10 00:47:15.402834 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-09-10 00:47:15.402849 | orchestrator | to this access issue: 2025-09-10 00:47:15.402860 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-09-10 00:47:15.402871 | orchestrator | directory 2025-09-10 00:47:15.402881 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-10 00:47:15.402892 | orchestrator | 2025-09-10 00:47:15.402902 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-09-10 00:47:15.402913 | orchestrator | Wednesday 10 September 2025 00:45:14 +0000 (0:00:01.267) 0:00:29.392 *** 2025-09-10 00:47:15.402924 | orchestrator | [WARNING]: Skipped 2025-09-10 00:47:15.402934 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-09-10 00:47:15.402945 | orchestrator | to this access issue: 2025-09-10 00:47:15.402955 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-09-10 00:47:15.402966 | orchestrator | directory 2025-09-10 00:47:15.402977 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-10 
00:47:15.402987 | orchestrator | 2025-09-10 00:47:15.402998 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-09-10 00:47:15.403008 | orchestrator | Wednesday 10 September 2025 00:45:14 +0000 (0:00:00.651) 0:00:30.044 *** 2025-09-10 00:47:15.403019 | orchestrator | [WARNING]: Skipped 2025-09-10 00:47:15.403033 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-09-10 00:47:15.403044 | orchestrator | to this access issue: 2025-09-10 00:47:15.403055 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-09-10 00:47:15.403075 | orchestrator | directory 2025-09-10 00:47:15.403086 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-10 00:47:15.403096 | orchestrator | 2025-09-10 00:47:15.403107 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2025-09-10 00:47:15.403117 | orchestrator | Wednesday 10 September 2025 00:45:15 +0000 (0:00:00.789) 0:00:30.833 *** 2025-09-10 00:47:15.403128 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:47:15.403138 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:47:15.403149 | orchestrator | changed: [testbed-manager] 2025-09-10 00:47:15.403159 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:47:15.403170 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:47:15.403181 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:47:15.403191 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:47:15.403201 | orchestrator | 2025-09-10 00:47:15.403212 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-09-10 00:47:15.403223 | orchestrator | Wednesday 10 September 2025 00:45:20 +0000 (0:00:04.458) 0:00:35.292 *** 2025-09-10 00:47:15.403233 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-10 
00:47:15.403244 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-10 00:47:15.403255 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-10 00:47:15.403266 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-10 00:47:15.403276 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-10 00:47:15.403287 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-10 00:47:15.403298 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-10 00:47:15.403308 | orchestrator | 2025-09-10 00:47:15.403320 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-09-10 00:47:15.403338 | orchestrator | Wednesday 10 September 2025 00:45:24 +0000 (0:00:04.554) 0:00:39.847 *** 2025-09-10 00:47:15.403357 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:47:15.403388 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:47:15.403407 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:47:15.403425 | orchestrator | changed: [testbed-manager] 2025-09-10 00:47:15.403444 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:47:15.403461 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:47:15.403480 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:47:15.403519 | orchestrator | 2025-09-10 00:47:15.403539 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-09-10 00:47:15.403557 | orchestrator | Wednesday 10 September 2025 00:45:27 +0000 (0:00:02.990) 0:00:42.837 *** 2025-09-10 00:47:15.403578 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 
'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-10 00:47:15.403610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-10 00:47:15.403636 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-10 00:47:15.403653 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-10 00:47:15.403665 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-10 00:47:15.403676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-10 00:47:15.403687 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:47:15.403699 | orchestrator | 
ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:47:15.403710 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-10 00:47:15.403726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-10 00:47:15.403744 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:47:15.403759 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:47:15.403771 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-10 00:47:15.403782 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-10 00:47:15.403793 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-10 00:47:15.403804 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-10 00:47:15.403815 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-10 00:47:15.403842 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-10 00:47:15.403854 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:47:15.403870 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:47:15.403881 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:47:15.403892 | orchestrator | 2025-09-10 00:47:15.403903 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-09-10 00:47:15.403914 | orchestrator | Wednesday 10 September 2025 00:45:30 +0000 (0:00:02.943) 0:00:45.780 *** 2025-09-10 00:47:15.403924 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-10 00:47:15.403935 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-10 00:47:15.403946 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-10 00:47:15.403957 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-10 00:47:15.403967 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-10 00:47:15.403978 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-10 00:47:15.403988 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-10 00:47:15.403999 | orchestrator | 2025-09-10 00:47:15.404009 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-09-10 00:47:15.404020 | orchestrator | Wednesday 10 September 2025 00:45:33 +0000 (0:00:02.549) 0:00:48.330 *** 2025-09-10 00:47:15.404031 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-10 00:47:15.404041 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-10 00:47:15.404052 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-10 00:47:15.404062 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-10 00:47:15.404078 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-10 00:47:15.404089 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-10 00:47:15.404099 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-10 00:47:15.404110 | orchestrator | 2025-09-10 00:47:15.404121 | orchestrator | TASK [common : 
Check common containers] **************************************** 2025-09-10 00:47:15.404131 | orchestrator | Wednesday 10 September 2025 00:45:35 +0000 (0:00:02.249) 0:00:50.579 *** 2025-09-10 00:47:15.404142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-10 00:47:15.404159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-10 00:47:15.404175 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-10 00:47:15.404187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-10 00:47:15.404198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.404209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.404221 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-10 00:47:15.404237 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-10 00:47:15.404263 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-10 00:47:15.404275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.404290 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.404302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.404313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.404325 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.404341 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.404352 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.404369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.404381 |
orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.404392 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.404403 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.404414 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:47:15.404424 | orchestrator |
2025-09-10 00:47:15.404435 | orchestrator | TASK [common : Creating log volume] ********************************************
2025-09-10 00:47:15.404446 | orchestrator | Wednesday 10 September 2025 00:45:39 +0000 (0:00:03.670) 0:00:54.250 ***
2025-09-10 00:47:15.404463 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:47:15.404473 | orchestrator | changed: [testbed-manager]
2025-09-10 00:47:15.404484 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:47:15.404609 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:47:15.404625 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:47:15.404636 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:47:15.404647 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:47:15.404657 | orchestrator |
2025-09-10 00:47:15.404668 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2025-09-10 00:47:15.404678 | orchestrator | Wednesday 10 September 2025 00:45:40 +0000 (0:00:01.615) 0:00:55.865 ***
2025-09-10 00:47:15.404689 | orchestrator | changed: [testbed-manager]
2025-09-10 00:47:15.404700 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:47:15.404711 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:47:15.404721 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:47:15.404732 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:47:15.404742 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:47:15.404753 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:47:15.404763 | orchestrator |
2025-09-10 00:47:15.404774 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-10 00:47:15.404784 | orchestrator | Wednesday 10 September 2025 00:45:41 +0000 (0:00:01.240) 0:00:57.106 ***
2025-09-10 00:47:15.404795 | orchestrator |
2025-09-10 00:47:15.404806 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-10 00:47:15.404816 | orchestrator | Wednesday 10 September 2025 00:45:42 +0000 (0:00:00.058) 0:00:57.165 ***
2025-09-10 00:47:15.404827 | orchestrator |
2025-09-10 00:47:15.404837 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-10 00:47:15.404848 | orchestrator | Wednesday 10 September 2025 00:45:42 +0000 (0:00:00.068) 0:00:57.233 ***
2025-09-10 00:47:15.404859 | orchestrator |
2025-09-10 00:47:15.404869 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-10 00:47:15.404880 | orchestrator | Wednesday 10 September 2025 00:45:42 +0000 (0:00:00.062) 0:00:57.296 ***
2025-09-10 00:47:15.404890 | orchestrator |
2025-09-10 00:47:15.404901 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-10 00:47:15.404911 | orchestrator | Wednesday 10 September 2025 00:45:42 +0000 (0:00:00.296) 0:00:57.593 ***
2025-09-10 00:47:15.404922 | orchestrator |
2025-09-10 00:47:15.404932 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-10 00:47:15.404942 | orchestrator | Wednesday 10 September 2025 00:45:42 +0000 (0:00:00.069) 0:00:57.663 ***
2025-09-10 00:47:15.404953 | orchestrator |
2025-09-10 00:47:15.404963 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-10 00:47:15.404974 | orchestrator | Wednesday 10 September 2025 00:45:42 +0000 (0:00:00.065) 0:00:57.729 ***
2025-09-10 00:47:15.404984 | orchestrator |
2025-09-10 00:47:15.404995 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2025-09-10 00:47:15.405013 | orchestrator | Wednesday 10 September 2025 00:45:42 +0000 (0:00:00.091) 0:00:57.820 ***
2025-09-10 00:47:15.405024 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:47:15.405035 | orchestrator | changed: [testbed-manager]
2025-09-10 00:47:15.405046 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:47:15.405056 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:47:15.405067 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:47:15.405078 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:47:15.405088 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:47:15.405099 | orchestrator |
2025-09-10 00:47:15.405108 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2025-09-10 00:47:15.405118 | orchestrator | Wednesday 10 September 2025 00:46:23 +0000 (0:00:40.363) 0:01:38.184 ***
2025-09-10 00:47:15.405127 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:47:15.405137 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:47:15.405146 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:47:15.405164 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:47:15.405173 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:47:15.405183 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:47:15.405192 | orchestrator | changed: [testbed-manager]
2025-09-10 00:47:15.405201 | orchestrator |
2025-09-10 00:47:15.405216 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2025-09-10 00:47:15.405233 | orchestrator | Wednesday 10 September 2025 00:47:05 +0000 (0:00:42.344) 0:02:20.529 ***
2025-09-10 00:47:15.405243 | orchestrator | ok: [testbed-manager]
2025-09-10 00:47:15.405253 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:47:15.405263 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:47:15.405272 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:47:15.405282 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:47:15.405291 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:47:15.405300 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:47:15.405310 | orchestrator |
2025-09-10 00:47:15.405319 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2025-09-10 00:47:15.405329 | orchestrator | Wednesday 10 September 2025 00:47:07 +0000 (0:00:02.151) 0:02:22.680 ***
2025-09-10 00:47:15.405338 | orchestrator | changed: [testbed-manager]
2025-09-10 00:47:15.405348 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:47:15.405357 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:47:15.405367 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:47:15.405376 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:47:15.405385 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:47:15.405395 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:47:15.405404 | orchestrator |
2025-09-10 00:47:15.405413 | orchestrator | PLAY RECAP *********************************************************************
2025-09-10 00:47:15.405423 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-10 00:47:15.405433 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-10 00:47:15.405443 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-10 00:47:15.405453 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-10 00:47:15.405462 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-10 00:47:15.405472 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-10 00:47:15.405481 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-10 00:47:15.405490 | orchestrator |
2025-09-10 00:47:15.405517 | orchestrator |
2025-09-10 00:47:15.405526 | orchestrator | TASKS RECAP ********************************************************************
2025-09-10 00:47:15.405536 | orchestrator | Wednesday 10 September 2025 00:47:12 +0000 (0:00:04.735) 0:02:27.416 ***
2025-09-10 00:47:15.405546 | orchestrator | ===============================================================================
2025-09-10 00:47:15.405556 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 42.34s
2025-09-10 00:47:15.405565 | orchestrator | common : Restart fluentd container ------------------------------------- 40.36s
2025-09-10 00:47:15.405575 | orchestrator | common : Copying over config.json files for services -------------------- 8.60s
2025-09-10 00:47:15.405584 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.15s
2025-09-10 00:47:15.405594 | orchestrator | common : Restart cron container ----------------------------------------- 4.74s
2025-09-10 00:47:15.405609 | orchestrator | common : Copying over cron logrotate config file ------------------------ 4.55s
2025-09-10 00:47:15.405619 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.46s
2025-09-10 00:47:15.405629 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.00s
2025-09-10 00:47:15.405638 | orchestrator | common : Check common containers ---------------------------------------- 3.67s
2025-09-10 00:47:15.405648 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.02s
2025-09-10 00:47:15.405657 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.99s
2025-09-10 00:47:15.405666 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.94s
2025-09-10 00:47:15.405676 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.55s
2025-09-10 00:47:15.405685 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.25s
2025-09-10 00:47:15.405700 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.15s
2025-09-10 00:47:15.405710 | orchestrator | common : Creating log volume -------------------------------------------- 1.62s
2025-09-10 00:47:15.405719 | orchestrator | common : include_tasks -------------------------------------------------- 1.49s
2025-09-10 00:47:15.405728 | orchestrator | common : include_tasks -------------------------------------------------- 1.34s
2025-09-10 00:47:15.405738 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.32s
2025-09-10 00:47:15.405747 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.27s
2025-09-10 00:47:15.405757 | orchestrator | 2025-09-10 00:47:15 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED
2025-09-10 00:47:15.405767 | orchestrator | 2025-09-10 00:47:15 | INFO  | Task a1213808-8d6f-4369-92f9-e840781d332a is in state STARTED
2025-09-10 00:47:15.405776 | orchestrator | 2025-09-10 00:47:15 | INFO  | Task 852cc65b-e3c4-42e1-a4cd-2a1fc224e3d1 is in state STARTED
2025-09-10 00:47:15.405790 | orchestrator | 2025-09-10 00:47:15 | INFO  | Task 691374fb-c000-4f9d-a74f-9796e1432f6d is in state STARTED
2025-09-10 00:47:15.405912 | orchestrator | 2025-09-10 00:47:15 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED
2025-09-10 00:47:15.405928 | orchestrator | 2025-09-10 00:47:15 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:47:18.431238 | orchestrator | 2025-09-10 00:47:18 | INFO  | Task d678e451-bfb8-4a54-a5d6-5bebe65b2178 is in state STARTED
2025-09-10 00:47:18.431691 | orchestrator | 2025-09-10 00:47:18 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED
2025-09-10 00:47:18.432256 | orchestrator | 2025-09-10 00:47:18 | INFO  | Task a1213808-8d6f-4369-92f9-e840781d332a is in state STARTED
2025-09-10 00:47:18.435013 | orchestrator | 2025-09-10 00:47:18 | INFO  | Task 852cc65b-e3c4-42e1-a4cd-2a1fc224e3d1 is in state STARTED
2025-09-10 00:47:18.435615 | orchestrator | 2025-09-10 00:47:18 | INFO  | Task 691374fb-c000-4f9d-a74f-9796e1432f6d is in state STARTED
2025-09-10 00:47:18.436345 |
orchestrator | 2025-09-10 00:47:18 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED
2025-09-10 00:47:18.437222 | orchestrator | 2025-09-10 00:47:18 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:47:21.462430 | orchestrator | 2025-09-10 00:47:21 | INFO  | Task d678e451-bfb8-4a54-a5d6-5bebe65b2178 is in state STARTED
2025-09-10 00:47:21.464435 | orchestrator | 2025-09-10 00:47:21 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED
2025-09-10 00:47:21.464463 | orchestrator | 2025-09-10 00:47:21 | INFO  | Task a1213808-8d6f-4369-92f9-e840781d332a is in state STARTED
2025-09-10 00:47:21.465342 | orchestrator | 2025-09-10 00:47:21 | INFO  | Task 852cc65b-e3c4-42e1-a4cd-2a1fc224e3d1 is in state STARTED
2025-09-10 00:47:21.465957 | orchestrator | 2025-09-10 00:47:21 | INFO  | Task 691374fb-c000-4f9d-a74f-9796e1432f6d is in state STARTED
2025-09-10 00:47:21.466507 | orchestrator | 2025-09-10 00:47:21 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED
2025-09-10 00:47:21.466531 | orchestrator | 2025-09-10 00:47:21 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:47:24.540054 | orchestrator | 2025-09-10 00:47:24 | INFO  | Task d678e451-bfb8-4a54-a5d6-5bebe65b2178 is in state STARTED
2025-09-10 00:47:24.540646 | orchestrator | 2025-09-10 00:47:24 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED
2025-09-10 00:47:24.541226 | orchestrator | 2025-09-10 00:47:24 | INFO  | Task a1213808-8d6f-4369-92f9-e840781d332a is in state STARTED
2025-09-10 00:47:24.542222 | orchestrator | 2025-09-10 00:47:24 | INFO  | Task 852cc65b-e3c4-42e1-a4cd-2a1fc224e3d1 is in state STARTED
2025-09-10 00:47:24.542695 | orchestrator | 2025-09-10 00:47:24 | INFO  | Task 691374fb-c000-4f9d-a74f-9796e1432f6d is in state STARTED
2025-09-10 00:47:24.543479 | orchestrator | 2025-09-10 00:47:24 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED
2025-09-10 00:47:24.543521 | orchestrator | 2025-09-10 00:47:24 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:47:27.615442 | orchestrator | 2025-09-10 00:47:27 | INFO  | Task d678e451-bfb8-4a54-a5d6-5bebe65b2178 is in state STARTED
2025-09-10 00:47:27.615569 | orchestrator | 2025-09-10 00:47:27 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED
2025-09-10 00:47:27.617686 | orchestrator | 2025-09-10 00:47:27 | INFO  | Task a1213808-8d6f-4369-92f9-e840781d332a is in state STARTED
2025-09-10 00:47:27.625452 | orchestrator | 2025-09-10 00:47:27 | INFO  | Task 852cc65b-e3c4-42e1-a4cd-2a1fc224e3d1 is in state STARTED
2025-09-10 00:47:27.668876 | orchestrator | 2025-09-10 00:47:27 | INFO  | Task 691374fb-c000-4f9d-a74f-9796e1432f6d is in state STARTED
2025-09-10 00:47:27.668950 | orchestrator | 2025-09-10 00:47:27 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED
2025-09-10 00:47:27.668965 | orchestrator | 2025-09-10 00:47:27 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:47:30.744956 | orchestrator | 2025-09-10 00:47:30 | INFO  | Task d678e451-bfb8-4a54-a5d6-5bebe65b2178 is in state STARTED
2025-09-10 00:47:30.745042 | orchestrator | 2025-09-10 00:47:30 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED
2025-09-10 00:47:30.745057 | orchestrator | 2025-09-10 00:47:30 | INFO  | Task a1213808-8d6f-4369-92f9-e840781d332a is in state STARTED
2025-09-10 00:47:30.745069 | orchestrator | 2025-09-10 00:47:30 | INFO  | Task 852cc65b-e3c4-42e1-a4cd-2a1fc224e3d1 is in state STARTED
2025-09-10 00:47:30.745096 | orchestrator | 2025-09-10 00:47:30 | INFO  | Task 691374fb-c000-4f9d-a74f-9796e1432f6d is in state STARTED
2025-09-10 00:47:30.745108 | orchestrator | 2025-09-10 00:47:30 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED
2025-09-10 00:47:30.745120 | orchestrator | 2025-09-10 00:47:30 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:47:33.774932 | orchestrator | 2025-09-10 00:47:33 | INFO  | Task d678e451-bfb8-4a54-a5d6-5bebe65b2178 is in state SUCCESS
2025-09-10 00:47:33.775023 | orchestrator | 2025-09-10 00:47:33 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED
2025-09-10 00:47:33.775036 | orchestrator | 2025-09-10 00:47:33 | INFO  | Task a67a1127-092e-4986-b868-18f92804abb6 is in state STARTED
2025-09-10 00:47:33.775046 | orchestrator | 2025-09-10 00:47:33 | INFO  | Task a1213808-8d6f-4369-92f9-e840781d332a is in state STARTED
2025-09-10 00:47:33.775081 | orchestrator | 2025-09-10 00:47:33 | INFO  | Task 852cc65b-e3c4-42e1-a4cd-2a1fc224e3d1 is in state STARTED
2025-09-10 00:47:33.777177 | orchestrator | 2025-09-10 00:47:33 | INFO  | Task 691374fb-c000-4f9d-a74f-9796e1432f6d is in state STARTED
2025-09-10 00:47:33.777197 | orchestrator | 2025-09-10 00:47:33 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED
2025-09-10 00:47:33.777207 | orchestrator | 2025-09-10 00:47:33 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:47:36.874883 | orchestrator | 2025-09-10 00:47:36 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED
2025-09-10 00:47:36.874982 | orchestrator | 2025-09-10 00:47:36 | INFO  | Task a67a1127-092e-4986-b868-18f92804abb6 is in state STARTED
2025-09-10 00:47:36.877901 | orchestrator | 2025-09-10 00:47:36 | INFO  | Task a1213808-8d6f-4369-92f9-e840781d332a is in state STARTED
2025-09-10 00:47:36.877953 | orchestrator | 2025-09-10 00:47:36 | INFO  | Task 852cc65b-e3c4-42e1-a4cd-2a1fc224e3d1 is in state STARTED
2025-09-10 00:47:36.877966 | orchestrator | 2025-09-10 00:47:36 | INFO  | Task 691374fb-c000-4f9d-a74f-9796e1432f6d is in state STARTED
2025-09-10 00:47:36.880473 | orchestrator | 2025-09-10 00:47:36 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED
2025-09-10 00:47:36.880539 | orchestrator | 2025-09-10 00:47:36 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:47:39.924823 | orchestrator | 2025-09-10 00:47:39 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED
2025-09-10 00:47:39.924924 | orchestrator | 2025-09-10 00:47:39 | INFO  | Task a67a1127-092e-4986-b868-18f92804abb6 is in state STARTED
2025-09-10 00:47:39.924939 | orchestrator | 2025-09-10 00:47:39 | INFO  | Task a1213808-8d6f-4369-92f9-e840781d332a is in state STARTED
2025-09-10 00:47:39.924950 | orchestrator | 2025-09-10 00:47:39 | INFO  | Task 852cc65b-e3c4-42e1-a4cd-2a1fc224e3d1 is in state STARTED
2025-09-10 00:47:39.924961 | orchestrator | 2025-09-10 00:47:39 | INFO  | Task 691374fb-c000-4f9d-a74f-9796e1432f6d is in state STARTED
2025-09-10 00:47:39.924972 | orchestrator | 2025-09-10 00:47:39 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED
2025-09-10 00:47:39.924983 | orchestrator | 2025-09-10 00:47:39 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:47:42.967245 | orchestrator | 2025-09-10 00:47:42 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED
2025-09-10 00:47:42.967326 | orchestrator | 2025-09-10 00:47:42 | INFO  | Task a67a1127-092e-4986-b868-18f92804abb6 is in state STARTED
2025-09-10 00:47:42.967542 | orchestrator | 2025-09-10 00:47:42 | INFO  | Task a1213808-8d6f-4369-92f9-e840781d332a is in state STARTED
2025-09-10 00:47:42.968583 | orchestrator | 2025-09-10 00:47:42 | INFO  | Task 852cc65b-e3c4-42e1-a4cd-2a1fc224e3d1 is in state SUCCESS
2025-09-10 00:47:42.970323 | orchestrator |
2025-09-10 00:47:42.970365 | orchestrator |
2025-09-10 00:47:42.970377 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-10 00:47:42.970389 | orchestrator |
2025-09-10 00:47:42.970399 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-10 00:47:42.970411 | orchestrator | Wednesday 10 September 2025 00:47:17 +0000 (0:00:00.301) 0:00:00.301 ***
2025-09-10 00:47:42.970422 | orchestrator | ok: [testbed-node-0] 2025-09-10
00:47:42.970433 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:47:42.970444 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:47:42.970455 | orchestrator |
2025-09-10 00:47:42.970465 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-10 00:47:42.970518 | orchestrator | Wednesday 10 September 2025 00:47:17 +0000 (0:00:00.293) 0:00:00.595 ***
2025-09-10 00:47:42.970534 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2025-09-10 00:47:42.970563 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2025-09-10 00:47:42.970574 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2025-09-10 00:47:42.970585 | orchestrator |
2025-09-10 00:47:42.970603 | orchestrator | PLAY [Apply role memcached] ****************************************************
2025-09-10 00:47:42.970614 | orchestrator |
2025-09-10 00:47:42.970625 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2025-09-10 00:47:42.970635 | orchestrator | Wednesday 10 September 2025 00:47:18 +0000 (0:00:00.490) 0:00:01.086 ***
2025-09-10 00:47:42.970646 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-10 00:47:42.970657 | orchestrator |
2025-09-10 00:47:42.970668 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2025-09-10 00:47:42.970679 | orchestrator | Wednesday 10 September 2025 00:47:18 +0000 (0:00:00.497) 0:00:01.583 ***
2025-09-10 00:47:42.970690 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-09-10 00:47:42.970700 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-09-10 00:47:42.970711 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-09-10 00:47:42.970721 | orchestrator |
2025-09-10 00:47:42.970732 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2025-09-10 00:47:42.970742 | orchestrator | Wednesday 10 September 2025 00:47:19 +0000 (0:00:00.754) 0:00:02.338 ***
2025-09-10 00:47:42.970754 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-09-10 00:47:42.970773 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-09-10 00:47:42.970791 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-09-10 00:47:42.970810 | orchestrator |
2025-09-10 00:47:42.970827 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2025-09-10 00:47:42.970845 | orchestrator | Wednesday 10 September 2025 00:47:21 +0000 (0:00:01.875) 0:00:04.213 ***
2025-09-10 00:47:42.970863 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:47:42.970880 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:47:42.970909 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:47:42.970927 | orchestrator |
2025-09-10 00:47:42.970946 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2025-09-10 00:47:42.970965 | orchestrator | Wednesday 10 September 2025 00:47:23 +0000 (0:00:02.058) 0:00:06.272 ***
2025-09-10 00:47:42.970984 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:47:42.971003 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:47:42.971022 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:47:42.971033 | orchestrator |
2025-09-10 00:47:42.971044 | orchestrator | PLAY RECAP *********************************************************************
2025-09-10 00:47:42.971055 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-10 00:47:42.971067 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-10 00:47:42.971077 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-10 00:47:42.971088 | orchestrator |
2025-09-10 00:47:42.971098 | orchestrator |
2025-09-10 00:47:42.971109 | orchestrator | TASKS RECAP ********************************************************************
2025-09-10 00:47:42.971120 | orchestrator | Wednesday 10 September 2025 00:47:31 +0000 (0:00:07.703) 0:00:13.975 ***
2025-09-10 00:47:42.971131 | orchestrator | ===============================================================================
2025-09-10 00:47:42.971141 | orchestrator | memcached : Restart memcached container --------------------------------- 7.70s
2025-09-10 00:47:42.971152 | orchestrator | memcached : Check memcached container ----------------------------------- 2.06s
2025-09-10 00:47:42.971162 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.88s
2025-09-10 00:47:42.971184 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.75s
2025-09-10 00:47:42.971195 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.50s
2025-09-10 00:47:42.971205 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.49s
2025-09-10 00:47:42.971221 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s
2025-09-10 00:47:42.971240 | orchestrator |
2025-09-10 00:47:42.971258 | orchestrator |
2025-09-10 00:47:42.971275 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-10 00:47:42.971293 | orchestrator |
2025-09-10 00:47:42.971310 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-10 00:47:42.971330 | orchestrator | Wednesday 10 September 2025 00:47:17 +0000 (0:00:00.285) 0:00:00.286 ***
2025-09-10 00:47:42.971349 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:47:42.971368 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:47:42.971379 | orchestrator | ok: [testbed-node-2] 2025-09-10
00:47:42.971390 | orchestrator |
2025-09-10 00:47:42.971401 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-10 00:47:42.971426 | orchestrator | Wednesday 10 September 2025 00:47:17 +0000 (0:00:00.351) 0:00:00.637 ***
2025-09-10 00:47:42.971437 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2025-09-10 00:47:42.971448 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2025-09-10 00:47:42.971458 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2025-09-10 00:47:42.971469 | orchestrator |
2025-09-10 00:47:42.971501 | orchestrator | PLAY [Apply role redis] ********************************************************
2025-09-10 00:47:42.971512 | orchestrator |
2025-09-10 00:47:42.971523 | orchestrator | TASK [redis : include_tasks] ***************************************************
2025-09-10 00:47:42.971533 | orchestrator | Wednesday 10 September 2025 00:47:18 +0000 (0:00:00.419) 0:00:01.057 ***
2025-09-10 00:47:42.971544 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-10 00:47:42.971555 | orchestrator |
2025-09-10 00:47:42.971565 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2025-09-10 00:47:42.971576 | orchestrator | Wednesday 10 September 2025 00:47:18 +0000 (0:00:00.445) 0:00:01.502 ***
2025-09-10 00:47:42.971597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-10 00:47:42.971614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-10 00:47:42.971626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-10 00:47:42.971638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-10 00:47:42.971659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-10 00:47:42.971678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-10 00:47:42.971690 | orchestrator |
2025-09-10 00:47:42.971701 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2025-09-10 00:47:42.971712 | orchestrator | Wednesday 10 September 2025 00:47:19 +0000 (0:00:01.255) 0:00:02.757 ***
2025-09-10 00:47:42.971728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-10 00:47:42.971740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-10 00:47:42.971751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-10 00:47:42.971770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-10 00:47:42.971782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-10 00:47:42.971799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-10 00:47:42.971810 | orchestrator | 2025-09-10 00:47:42.971821 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-09-10 00:47:42.971832 | orchestrator | Wednesday 10 September 2025 00:47:22 +0000 (0:00:02.716) 0:00:05.474 *** 2025-09-10 00:47:42.971848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': 
{'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-10 00:47:42.971859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-10 00:47:42.971870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-10 00:47:42.971890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 
'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-10 00:47:42.971902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-10 00:47:42.971913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-10 00:47:42.971924 | orchestrator | 2025-09-10 00:47:42.971941 | orchestrator | TASK [redis : Check redis containers] 
****************************************** 2025-09-10 00:47:42.971952 | orchestrator | Wednesday 10 September 2025 00:47:25 +0000 (0:00:02.834) 0:00:08.309 *** 2025-09-10 00:47:42.971963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-10 00:47:42.971978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-10 00:47:42.971989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-10 00:47:42.972007 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-10 00:47:42.972019 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-10 00:47:42.972030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-10 00:47:42.972041 | orchestrator | 2025-09-10 00:47:42.972052 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-10 00:47:42.972062 | orchestrator | Wednesday 10 September 2025 00:47:27 +0000 (0:00:01.873) 0:00:10.182 *** 2025-09-10 00:47:42.972073 | orchestrator | 2025-09-10 00:47:42.972084 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-10 00:47:42.972099 | orchestrator | Wednesday 10 September 2025 00:47:27 +0000 (0:00:00.066) 0:00:10.248 *** 2025-09-10 00:47:42.972110 | orchestrator | 2025-09-10 00:47:42.972121 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-10 00:47:42.972131 | orchestrator | Wednesday 10 September 2025 00:47:27 +0000 (0:00:00.096) 0:00:10.345 *** 2025-09-10 00:47:42.972142 | orchestrator | 2025-09-10 00:47:42.972152 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-09-10 00:47:42.972163 | orchestrator | Wednesday 10 September 2025 00:47:27 +0000 (0:00:00.067) 0:00:10.412 *** 2025-09-10 00:47:42.972173 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:47:42.972184 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:47:42.972195 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:47:42.972205 | orchestrator | 2025-09-10 00:47:42.972216 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-09-10 00:47:42.972227 | orchestrator | Wednesday 10 September 2025 00:47:31 +0000 (0:00:04.267) 0:00:14.680 *** 2025-09-10 00:47:42.972237 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:47:42.972248 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:47:42.972263 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:47:42.972274 | orchestrator | 2025-09-10 
00:47:42.972284 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-10 00:47:42.972295 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-10 00:47:42.972312 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-10 00:47:42.972323 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-10 00:47:42.972334 | orchestrator | 2025-09-10 00:47:42.972344 | orchestrator | 2025-09-10 00:47:42.972355 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-10 00:47:42.972365 | orchestrator | Wednesday 10 September 2025 00:47:40 +0000 (0:00:09.224) 0:00:23.905 *** 2025-09-10 00:47:42.972376 | orchestrator | =============================================================================== 2025-09-10 00:47:42.972386 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 9.22s 2025-09-10 00:47:42.972397 | orchestrator | redis : Restart redis container ----------------------------------------- 4.27s 2025-09-10 00:47:42.972407 | orchestrator | redis : Copying over redis config files --------------------------------- 2.83s 2025-09-10 00:47:42.972418 | orchestrator | redis : Copying over default config.json files -------------------------- 2.72s 2025-09-10 00:47:42.972428 | orchestrator | redis : Check redis containers ------------------------------------------ 1.87s 2025-09-10 00:47:42.972439 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.26s 2025-09-10 00:47:42.972449 | orchestrator | redis : include_tasks --------------------------------------------------- 0.45s 2025-09-10 00:47:42.972460 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.42s 2025-09-10 00:47:42.972470 | orchestrator | 
Group hosts based on Kolla action --------------------------------------- 0.35s 2025-09-10 00:47:42.972526 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.23s 2025-09-10 00:47:42.972538 | orchestrator | 2025-09-10 00:47:42 | INFO  | Task 691374fb-c000-4f9d-a74f-9796e1432f6d is in state STARTED 2025-09-10 00:47:42.972549 | orchestrator | 2025-09-10 00:47:42 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED 2025-09-10 00:47:42.972560 | orchestrator | 2025-09-10 00:47:42 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:47:46.008366 | orchestrator | 2025-09-10 00:47:46 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED 2025-09-10 00:47:46.008449 | orchestrator | 2025-09-10 00:47:46 | INFO  | Task a67a1127-092e-4986-b868-18f92804abb6 is in state STARTED 2025-09-10 00:47:46.013378 | orchestrator | 2025-09-10 00:47:46 | INFO  | Task a1213808-8d6f-4369-92f9-e840781d332a is in state STARTED 2025-09-10 00:47:46.018104 | orchestrator | 2025-09-10 00:47:46 | INFO  | Task 691374fb-c000-4f9d-a74f-9796e1432f6d is in state STARTED 2025-09-10 00:47:46.023919 | orchestrator | 2025-09-10 00:47:46 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED 2025-09-10 00:47:46.023958 | orchestrator | 2025-09-10 00:47:46 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:47:49.082848 | orchestrator | 2025-09-10 00:47:49 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED 2025-09-10 00:47:49.083384 | orchestrator | 2025-09-10 00:47:49 | INFO  | Task a67a1127-092e-4986-b868-18f92804abb6 is in state STARTED 2025-09-10 00:47:49.083716 | orchestrator | 2025-09-10 00:47:49 | INFO  | Task a1213808-8d6f-4369-92f9-e840781d332a is in state STARTED 2025-09-10 00:47:49.084315 | orchestrator | 2025-09-10 00:47:49 | INFO  | Task 691374fb-c000-4f9d-a74f-9796e1432f6d is in state STARTED 2025-09-10 00:47:49.085367 | orchestrator | 2025-09-10 00:47:49 | INFO  | 
Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED 2025-09-10 00:47:49.085390 | orchestrator | 2025-09-10 00:47:49 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:47:52.164132 | orchestrator | 2025-09-10 00:47:52 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED 2025-09-10 00:47:52.164714 | orchestrator | 2025-09-10 00:47:52 | INFO  | Task a67a1127-092e-4986-b868-18f92804abb6 is in state STARTED 2025-09-10 00:47:52.165512 | orchestrator | 2025-09-10 00:47:52 | INFO  | Task a1213808-8d6f-4369-92f9-e840781d332a is in state STARTED 2025-09-10 00:47:52.166345 | orchestrator | 2025-09-10 00:47:52 | INFO  | Task 691374fb-c000-4f9d-a74f-9796e1432f6d is in state STARTED 2025-09-10 00:47:52.167040 | orchestrator | 2025-09-10 00:47:52 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED 2025-09-10 00:47:52.167281 | orchestrator | 2025-09-10 00:47:52 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:47:55.212587 | orchestrator | 2025-09-10 00:47:55 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED 2025-09-10 00:47:55.212679 | orchestrator | 2025-09-10 00:47:55 | INFO  | Task a67a1127-092e-4986-b868-18f92804abb6 is in state STARTED 2025-09-10 00:47:55.212694 | orchestrator | 2025-09-10 00:47:55 | INFO  | Task a1213808-8d6f-4369-92f9-e840781d332a is in state STARTED 2025-09-10 00:47:55.212706 | orchestrator | 2025-09-10 00:47:55 | INFO  | Task 691374fb-c000-4f9d-a74f-9796e1432f6d is in state STARTED 2025-09-10 00:47:55.212717 | orchestrator | 2025-09-10 00:47:55 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED 2025-09-10 00:47:55.213526 | orchestrator | 2025-09-10 00:47:55 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:47:58.264431 | orchestrator | 2025-09-10 00:47:58 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED 2025-09-10 00:47:58.265213 | orchestrator | 2025-09-10 00:47:58 | INFO  | Task 
a67a1127-092e-4986-b868-18f92804abb6 is in state STARTED 2025-09-10 00:47:58.267608 | orchestrator | 2025-09-10 00:47:58 | INFO  | Task a1213808-8d6f-4369-92f9-e840781d332a is in state STARTED 2025-09-10 00:47:58.268831 | orchestrator | 2025-09-10 00:47:58 | INFO  | Task 691374fb-c000-4f9d-a74f-9796e1432f6d is in state STARTED 2025-09-10 00:47:58.269971 | orchestrator | 2025-09-10 00:47:58 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED 2025-09-10 00:47:58.271173 | orchestrator | 2025-09-10 00:47:58 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:48:01.312145 | orchestrator | 2025-09-10 00:48:01 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED 2025-09-10 00:48:01.314747 | orchestrator | 2025-09-10 00:48:01 | INFO  | Task a67a1127-092e-4986-b868-18f92804abb6 is in state STARTED 2025-09-10 00:48:01.314781 | orchestrator | 2025-09-10 00:48:01 | INFO  | Task a1213808-8d6f-4369-92f9-e840781d332a is in state STARTED 2025-09-10 00:48:01.317204 | orchestrator | 2025-09-10 00:48:01 | INFO  | Task 691374fb-c000-4f9d-a74f-9796e1432f6d is in state STARTED 2025-09-10 00:48:01.317227 | orchestrator | 2025-09-10 00:48:01 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED 2025-09-10 00:48:01.317239 | orchestrator | 2025-09-10 00:48:01 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:48:04.360894 | orchestrator | 2025-09-10 00:48:04 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED 2025-09-10 00:48:04.363434 | orchestrator | 2025-09-10 00:48:04 | INFO  | Task a67a1127-092e-4986-b868-18f92804abb6 is in state STARTED 2025-09-10 00:48:04.366699 | orchestrator | 2025-09-10 00:48:04 | INFO  | Task a1213808-8d6f-4369-92f9-e840781d332a is in state STARTED 2025-09-10 00:48:04.367593 | orchestrator | 2025-09-10 00:48:04 | INFO  | Task 691374fb-c000-4f9d-a74f-9796e1432f6d is in state STARTED 2025-09-10 00:48:04.368416 | orchestrator | 2025-09-10 00:48:04 | INFO  | Task 
6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED 2025-09-10 00:48:04.368712 | orchestrator | 2025-09-10 00:48:04 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:48:07.398575 | orchestrator | 2025-09-10 00:48:07 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED 2025-09-10 00:48:07.398979 | orchestrator | 2025-09-10 00:48:07 | INFO  | Task a67a1127-092e-4986-b868-18f92804abb6 is in state STARTED 2025-09-10 00:48:07.400007 | orchestrator | 2025-09-10 00:48:07 | INFO  | Task a1213808-8d6f-4369-92f9-e840781d332a is in state STARTED 2025-09-10 00:48:07.402392 | orchestrator | 2025-09-10 00:48:07 | INFO  | Task 691374fb-c000-4f9d-a74f-9796e1432f6d is in state STARTED 2025-09-10 00:48:07.402419 | orchestrator | 2025-09-10 00:48:07 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED 2025-09-10 00:48:07.402431 | orchestrator | 2025-09-10 00:48:07 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:48:10.428036 | orchestrator | 2025-09-10 00:48:10 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED 2025-09-10 00:48:10.428347 | orchestrator | 2025-09-10 00:48:10 | INFO  | Task a67a1127-092e-4986-b868-18f92804abb6 is in state STARTED 2025-09-10 00:48:10.429305 | orchestrator | 2025-09-10 00:48:10 | INFO  | Task a1213808-8d6f-4369-92f9-e840781d332a is in state STARTED 2025-09-10 00:48:10.429966 | orchestrator | 2025-09-10 00:48:10 | INFO  | Task 691374fb-c000-4f9d-a74f-9796e1432f6d is in state STARTED 2025-09-10 00:48:10.430783 | orchestrator | 2025-09-10 00:48:10 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED 2025-09-10 00:48:10.431004 | orchestrator | 2025-09-10 00:48:10 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:48:13.559130 | orchestrator | 2025-09-10 00:48:13 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED 2025-09-10 00:48:13.561000 | orchestrator | 2025-09-10 00:48:13 | INFO  | Task 
a67a1127-092e-4986-b868-18f92804abb6 is in state STARTED 2025-09-10 00:48:13.562101 | orchestrator | 2025-09-10 00:48:13 | INFO  | Task a1213808-8d6f-4369-92f9-e840781d332a is in state STARTED 2025-09-10 00:48:13.563665 | orchestrator | 2025-09-10 00:48:13 | INFO  | Task 691374fb-c000-4f9d-a74f-9796e1432f6d is in state STARTED 2025-09-10 00:48:13.565313 | orchestrator | 2025-09-10 00:48:13 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED 2025-09-10 00:48:13.565337 | orchestrator | 2025-09-10 00:48:13 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:48:16.602921 | orchestrator | 2025-09-10 00:48:16 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED 2025-09-10 00:48:16.603071 | orchestrator | 2025-09-10 00:48:16 | INFO  | Task a67a1127-092e-4986-b868-18f92804abb6 is in state STARTED 2025-09-10 00:48:16.603539 | orchestrator | 2025-09-10 00:48:16 | INFO  | Task a1213808-8d6f-4369-92f9-e840781d332a is in state STARTED 2025-09-10 00:48:16.604178 | orchestrator | 2025-09-10 00:48:16 | INFO  | Task 691374fb-c000-4f9d-a74f-9796e1432f6d is in state STARTED 2025-09-10 00:48:16.604639 | orchestrator | 2025-09-10 00:48:16 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED 2025-09-10 00:48:16.604660 | orchestrator | 2025-09-10 00:48:16 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:48:19.634760 | orchestrator | 2025-09-10 00:48:19 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED 2025-09-10 00:48:19.634958 | orchestrator | 2025-09-10 00:48:19 | INFO  | Task a67a1127-092e-4986-b868-18f92804abb6 is in state STARTED 2025-09-10 00:48:19.635768 | orchestrator | 2025-09-10 00:48:19 | INFO  | Task a1213808-8d6f-4369-92f9-e840781d332a is in state STARTED 2025-09-10 00:48:19.636398 | orchestrator | 2025-09-10 00:48:19 | INFO  | Task 691374fb-c000-4f9d-a74f-9796e1432f6d is in state STARTED 2025-09-10 00:48:19.637220 | orchestrator | 2025-09-10 00:48:19 | INFO  | Task 
6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED
2025-09-10 00:48:19.637243 | orchestrator | 2025-09-10 00:48:19 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:48:22.733080 | orchestrator | 2025-09-10 00:48:22 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED
2025-09-10 00:48:22.733182 | orchestrator | 2025-09-10 00:48:22 | INFO  | Task a67a1127-092e-4986-b868-18f92804abb6 is in state STARTED
2025-09-10 00:48:22.737199 | orchestrator | 2025-09-10 00:48:22 | INFO  | Task a1213808-8d6f-4369-92f9-e840781d332a is in state STARTED
2025-09-10 00:48:22.739085 | orchestrator | 2025-09-10 00:48:22 | INFO  | Task 691374fb-c000-4f9d-a74f-9796e1432f6d is in state SUCCESS
2025-09-10 00:48:22.740379 | orchestrator |
2025-09-10 00:48:22.740412 | orchestrator |
2025-09-10 00:48:22.740424 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-10 00:48:22.740436 | orchestrator |
2025-09-10 00:48:22.740477 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-10 00:48:22.740489 | orchestrator | Wednesday 10 September 2025 00:47:17 +0000 (0:00:00.265) 0:00:00.265 ***
2025-09-10 00:48:22.740499 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:48:22.740512 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:48:22.740523 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:48:22.740534 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:48:22.740545 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:48:22.740555 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:48:22.740566 | orchestrator |
2025-09-10 00:48:22.740578 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-10 00:48:22.740589 | orchestrator | Wednesday 10 September 2025 00:47:18 +0000 (0:00:00.710) 0:00:00.976 ***
2025-09-10 00:48:22.740600 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-10 00:48:22.740611 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-10 00:48:22.740622 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-10 00:48:22.740632 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-10 00:48:22.740643 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-10 00:48:22.740653 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-10 00:48:22.740664 | orchestrator |
2025-09-10 00:48:22.740675 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2025-09-10 00:48:22.740685 | orchestrator |
2025-09-10 00:48:22.740696 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2025-09-10 00:48:22.740706 | orchestrator | Wednesday 10 September 2025 00:47:18 +0000 (0:00:00.728) 0:00:01.704 ***
2025-09-10 00:48:22.740736 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-10 00:48:22.740749 | orchestrator |
2025-09-10 00:48:22.740759 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-09-10 00:48:22.740770 | orchestrator | Wednesday 10 September 2025 00:47:19 +0000 (0:00:01.088) 0:00:02.793 ***
2025-09-10 00:48:22.740781 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-09-10 00:48:22.740792 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-09-10 00:48:22.740803 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-09-10 00:48:22.740813 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-09-10 00:48:22.740849 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-09-10 00:48:22.740861 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-09-10 00:48:22.740871 | orchestrator |
2025-09-10 00:48:22.740882 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-09-10 00:48:22.740893 | orchestrator | Wednesday 10 September 2025 00:47:21 +0000 (0:00:01.483) 0:00:04.276 ***
2025-09-10 00:48:22.740903 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-09-10 00:48:22.740914 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-09-10 00:48:22.740925 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-09-10 00:48:22.740936 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-09-10 00:48:22.740946 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-09-10 00:48:22.740957 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-09-10 00:48:22.740968 | orchestrator |
2025-09-10 00:48:22.740981 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-09-10 00:48:22.740994 | orchestrator | Wednesday 10 September 2025 00:47:23 +0000 (0:00:01.856) 0:00:06.132 ***
2025-09-10 00:48:22.741006 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2025-09-10 00:48:22.741018 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:48:22.741032 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2025-09-10 00:48:22.741044 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:48:22.741056 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2025-09-10 00:48:22.741068 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:48:22.741080 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2025-09-10 00:48:22.741093 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:48:22.741105 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2025-09-10 00:48:22.741117 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:48:22.741129 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2025-09-10 00:48:22.741142 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:48:22.741154 | orchestrator |
2025-09-10 00:48:22.741167 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2025-09-10 00:48:22.741179 | orchestrator | Wednesday 10 September 2025 00:47:24 +0000 (0:00:01.558) 0:00:07.690 ***
2025-09-10 00:48:22.741192 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:48:22.741204 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:48:22.741217 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:48:22.741229 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:48:22.741242 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:48:22.741254 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:48:22.741267 | orchestrator |
2025-09-10 00:48:22.741279 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2025-09-10 00:48:22.741291 | orchestrator | Wednesday 10 September 2025 00:47:25 +0000 (0:00:00.769) 0:00:08.460 ***
2025-09-10 00:48:22.741325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-10 00:48:22.741342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-10 00:48:22.741368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-10 00:48:22.741380 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-10 00:48:22.741392 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-10 00:48:22.741403 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-10 00:48:22.741420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-10 00:48:22.741440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-10 00:48:22.741476 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-10 00:48:22.741488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-10 00:48:22.741499 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-10 00:48:22.741516 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-10 00:48:22.741528 | orchestrator |
2025-09-10 00:48:22.741539 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2025-09-10 00:48:22.741550 | orchestrator | Wednesday 10 September 2025 00:47:27 +0000 (0:00:01.650) 0:00:10.110 ***
2025-09-10 00:48:22.741561 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-10 00:48:22.741585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-10 00:48:22.741597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-10 00:48:22.741608 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-10 00:48:22.741619 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-10 00:48:22.741638 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-10 00:48:22.741657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-10 00:48:22.741673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-10 00:48:22.741685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-10 00:48:22.741696 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-10 00:48:22.741707 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-10 00:48:22.741726 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-10 00:48:22.741752 | orchestrator |
2025-09-10 00:48:22.741763 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2025-09-10 00:48:22.741775 | orchestrator | Wednesday 10 September 2025 00:47:30 +0000 (0:00:03.499) 0:00:13.609 ***
2025-09-10 00:48:22.741786 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:48:22.741797 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:48:22.741808 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:48:22.741819 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:48:22.741830 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:48:22.741840 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:48:22.741851 | orchestrator |
2025-09-10 00:48:22.741862 | orchestrator | TASK [openvswitch : Check openvswitch containers] ******************************
2025-09-10 00:48:22.741873 | orchestrator | Wednesday 10 September 2025 00:47:32 +0000 (0:00:01.319) 0:00:14.928 ***
2025-09-10 00:48:22.741889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-10 00:48:22.741901 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-10 00:48:22.741913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-10 00:48:22.741924 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-10 00:48:22.741949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-10 00:48:22.741960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-10 00:48:22.741976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-10 00:48:22.741987 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-10 00:48:22.741998 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-10 00:48:22.742009 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-10 00:48:22.742085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-10 00:48:22.742098 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-10 00:48:22.742109 | orchestrator |
2025-09-10 00:48:22.742120 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-10 00:48:22.742131 | orchestrator | Wednesday 10 September 2025 00:47:35 +0000 (0:00:03.706) 0:00:18.635 ***
2025-09-10 00:48:22.742142 | orchestrator |
2025-09-10 00:48:22.742154 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-10 00:48:22.742170 | orchestrator | Wednesday 10 September 2025 00:47:36 +0000 (0:00:00.397) 0:00:19.032 ***
2025-09-10 00:48:22.742181 | orchestrator |
2025-09-10 00:48:22.742192 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-10 00:48:22.742202 | orchestrator | Wednesday 10 September 2025 00:47:36 +0000 (0:00:00.132) 0:00:19.165 ***
2025-09-10 00:48:22.742213 | orchestrator |
2025-09-10 00:48:22.742224 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-10 00:48:22.742235 | orchestrator | Wednesday 10 September 2025 00:47:36 +0000 (0:00:00.357) 0:00:19.523 ***
2025-09-10 00:48:22.742245 | orchestrator |
2025-09-10 00:48:22.742256 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-10 00:48:22.742267 | orchestrator | Wednesday 10 September 2025 00:47:36 +0000 (0:00:00.351) 0:00:19.875 ***
2025-09-10 00:48:22.742278 | orchestrator |
2025-09-10 00:48:22.742288 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-10 00:48:22.742299 | orchestrator | Wednesday 10 September 2025 00:47:37 +0000 (0:00:00.135) 0:00:20.010 ***
2025-09-10 00:48:22.742309 | orchestrator |
2025-09-10 00:48:22.742320 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2025-09-10 00:48:22.742331 | orchestrator | Wednesday 10 September 2025 00:47:37 +0000 (0:00:00.204) 0:00:20.214 ***
2025-09-10 00:48:22.742342 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:48:22.742353 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:48:22.742363 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:48:22.742374 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:48:22.742385 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:48:22.742396 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:48:22.742406 | orchestrator |
2025-09-10 00:48:22.742417 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2025-09-10 00:48:22.742436 | orchestrator | Wednesday 10 September 2025 00:47:46 +0000 (0:00:08.873) 0:00:29.088 ***
2025-09-10 00:48:22.742464 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:48:22.742475 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:48:22.742486 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:48:22.742496 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:48:22.742507 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:48:22.742518 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:48:22.742529 | orchestrator |
2025-09-10 00:48:22.742539 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-09-10 00:48:22.742550 | orchestrator | Wednesday 10 September 2025 00:47:48 +0000 (0:00:01.959) 0:00:31.047 ***
2025-09-10 00:48:22.742561 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:48:22.742571 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:48:22.742582 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:48:22.742593 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:48:22.742604 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:48:22.742615 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:48:22.742625 | orchestrator |
2025-09-10 00:48:22.742636 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2025-09-10 00:48:22.742647 | orchestrator | Wednesday 10 September 2025 00:47:59 +0000 (0:00:10.999) 0:00:42.047 ***
2025-09-10 00:48:22.742658 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2025-09-10 00:48:22.742668 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2025-09-10 00:48:22.742679 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2025-09-10 00:48:22.742690 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2025-09-10 00:48:22.742701 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2025-09-10 00:48:22.742717 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2025-09-10 00:48:22.742728 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2025-09-10 00:48:22.742739 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2025-09-10 00:48:22.742750 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2025-09-10 00:48:22.742760 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2025-09-10 00:48:22.742771 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2025-09-10 00:48:22.742782 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2025-09-10 00:48:22.742793 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-10 00:48:22.742804 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-10 00:48:22.742814 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-10 00:48:22.742825 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-10 00:48:22.742836 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-10 00:48:22.742852 | orchestrator | ok: [testbed-node-5] => (item={'col':
'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-10 00:48:22.742863 | orchestrator | 2025-09-10 00:48:22.742880 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-09-10 00:48:22.742891 | orchestrator | Wednesday 10 September 2025 00:48:06 +0000 (0:00:07.511) 0:00:49.558 *** 2025-09-10 00:48:22.742902 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-09-10 00:48:22.742913 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:48:22.742924 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-09-10 00:48:22.742935 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:48:22.742946 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-09-10 00:48:22.742956 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:48:22.742967 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-09-10 00:48:22.742978 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-09-10 00:48:22.742988 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-09-10 00:48:22.742999 | orchestrator | 2025-09-10 00:48:22.743010 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-09-10 00:48:22.743021 | orchestrator | Wednesday 10 September 2025 00:48:09 +0000 (0:00:02.994) 0:00:52.553 *** 2025-09-10 00:48:22.743031 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-09-10 00:48:22.743042 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:48:22.743053 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-09-10 00:48:22.743064 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:48:22.743075 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-09-10 00:48:22.743086 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:48:22.743096 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 
2025-09-10 00:48:22.743107 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2025-09-10 00:48:22.743118 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2025-09-10 00:48:22.743129 | orchestrator |
2025-09-10 00:48:22.743139 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-09-10 00:48:22.743150 | orchestrator | Wednesday 10 September 2025 00:48:13 +0000 (0:00:03.527) 0:00:56.080 ***
2025-09-10 00:48:22.743161 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:48:22.743171 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:48:22.743182 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:48:22.743193 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:48:22.743204 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:48:22.743214 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:48:22.743225 | orchestrator |
2025-09-10 00:48:22.743236 | orchestrator | PLAY RECAP *********************************************************************
2025-09-10 00:48:22.743247 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-10 00:48:22.743259 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-10 00:48:22.743270 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-10 00:48:22.743280 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-10 00:48:22.743291 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-10 00:48:22.743307 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-10 00:48:22.743318 | orchestrator |
2025-09-10 00:48:22.743329 | orchestrator |
2025-09-10 00:48:22.743340 | orchestrator | TASKS RECAP ********************************************************************
2025-09-10 00:48:22.743351 | orchestrator | Wednesday 10 September 2025 00:48:21 +0000 (0:00:08.477) 0:01:04.558 ***
2025-09-10 00:48:22.743369 | orchestrator | ===============================================================================
2025-09-10 00:48:22.743380 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 19.48s
2025-09-10 00:48:22.743391 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 8.87s
2025-09-10 00:48:22.743401 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.51s
2025-09-10 00:48:22.743412 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.71s
2025-09-10 00:48:22.743422 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.53s
2025-09-10 00:48:22.743433 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.50s
2025-09-10 00:48:22.743460 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.99s
2025-09-10 00:48:22.743471 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.96s
2025-09-10 00:48:22.743482 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.86s
2025-09-10 00:48:22.743493 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.65s
2025-09-10 00:48:22.743503 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.58s
2025-09-10 00:48:22.743513 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.56s
2025-09-10 00:48:22.743524 | orchestrator | module-load : Load modules ---------------------------------------------- 1.48s
2025-09-10 00:48:22.743535 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.32s
2025-09-10 00:48:22.743546 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.09s
2025-09-10 00:48:22.743557 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.77s
2025-09-10 00:48:22.743568 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.73s
2025-09-10 00:48:22.743585 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.71s
2025-09-10 00:48:22.743596 | orchestrator | 2025-09-10 00:48:22 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED
2025-09-10 00:48:22.743607 | orchestrator | 2025-09-10 00:48:22 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:48:25.780109 | orchestrator | 2025-09-10 00:48:25 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED
2025-09-10 00:48:25.780219 | orchestrator | 2025-09-10 00:48:25 | INFO  | Task a67a1127-092e-4986-b868-18f92804abb6 is in state STARTED
2025-09-10 00:48:25.782809 | orchestrator | 2025-09-10 00:48:25 | INFO  | Task a1213808-8d6f-4369-92f9-e840781d332a is in state STARTED
2025-09-10 00:48:25.783155 | orchestrator | 2025-09-10 00:48:25 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED
2025-09-10 00:48:25.783965 | orchestrator | 2025-09-10 00:48:25 | INFO  | Task 03706a7e-02cf-4713-9bb4-57191532b70c is in state STARTED
2025-09-10 00:48:25.784001 | orchestrator | 2025-09-10 00:48:25 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:48:29.063932 | orchestrator | 2025-09-10 00:48:29 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED
2025-09-10 00:48:29.064027 | orchestrator | 2025-09-10 00:48:29 | INFO  | Task a67a1127-092e-4986-b868-18f92804abb6 is in state STARTED
2025-09-10 00:48:29.064042 | orchestrator | 2025-09-10 00:48:29 | INFO  | Task a1213808-8d6f-4369-92f9-e840781d332a is in state STARTED
2025-09-10 00:48:29.064052 | orchestrator | 2025-09-10 00:48:29 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED
2025-09-10 00:48:29.064063 | orchestrator | 2025-09-10 00:48:29 | INFO  | Task 03706a7e-02cf-4713-9bb4-57191532b70c is in state STARTED
2025-09-10 00:48:29.064074 | orchestrator | 2025-09-10 00:48:29 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:48:32.114855 | orchestrator | 2025-09-10 00:48:32 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state STARTED
2025-09-10 00:48:32.115080 | orchestrator | 2025-09-10 00:48:32 | INFO  | Task a67a1127-092e-4986-b868-18f92804abb6 is in state STARTED
2025-09-10 00:48:32.116282 | orchestrator | 2025-09-10 00:48:32 | INFO  | Task a1213808-8d6f-4369-92f9-e840781d332a is in state STARTED
2025-09-10 00:48:32.116615 | orchestrator | 2025-09-10 00:48:32 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED
2025-09-10 00:48:32.117190 | orchestrator | 2025-09-10 00:48:32 | INFO  | Task 03706a7e-02cf-4713-9bb4-57191532b70c is in state STARTED
2025-09-10 00:48:32.117212 | orchestrator | 2025-09-10 00:48:32 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:48:35.147539 | orchestrator | 2025-09-10 00:48:35 | INFO  | Task bd1cf9f1-0193-4c86-8ddd-73535aeff97b is in state SUCCESS
2025-09-10 00:48:35.149151 | orchestrator |
2025-09-10 00:48:35.149199 | orchestrator |
2025-09-10 00:48:35.149212 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2025-09-10 00:48:35.149224 | orchestrator |
2025-09-10 00:48:35.149236 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2025-09-10 00:48:35.149248 | orchestrator | Wednesday 10 September 2025 00:44:45 +0000 (0:00:00.304) 0:00:00.304 ***
2025-09-10 00:48:35.149259 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:48:35.149271 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:48:35.149282 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:48:35.149293 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:48:35.149305 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:48:35.149315 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:48:35.149349 | orchestrator |
2025-09-10 00:48:35.149361 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2025-09-10 00:48:35.149372 | orchestrator | Wednesday 10 September 2025 00:44:46 +0000 (0:00:00.754) 0:00:01.058 ***
2025-09-10 00:48:35.149384 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:48:35.149411 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:48:35.149423 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:48:35.149469 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:48:35.149481 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:48:35.149492 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:48:35.149503 | orchestrator |
2025-09-10 00:48:35.149514 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2025-09-10 00:48:35.149525 | orchestrator | Wednesday 10 September 2025 00:44:47 +0000 (0:00:00.739) 0:00:01.798 ***
2025-09-10 00:48:35.149536 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:48:35.149547 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:48:35.149558 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:48:35.149569 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:48:35.149580 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:48:35.149591 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:48:35.149602 | orchestrator |
2025-09-10 00:48:35.149613 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2025-09-10 00:48:35.149633 | orchestrator | Wednesday 10 September 2025 00:44:47 +0000 (0:00:00.804) 0:00:02.602 ***
2025-09-10 00:48:35.149645 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:48:35.149656 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:48:35.149666 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:48:35.149677 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:48:35.149687 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:48:35.149698 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:48:35.149709 | orchestrator |
2025-09-10 00:48:35.149719 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2025-09-10 00:48:35.149731 | orchestrator | Wednesday 10 September 2025 00:44:50 +0000 (0:00:02.030) 0:00:04.632 ***
2025-09-10 00:48:35.149744 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:48:35.149781 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:48:35.149794 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:48:35.149806 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:48:35.149819 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:48:35.149831 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:48:35.149843 | orchestrator |
2025-09-10 00:48:35.149856 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2025-09-10 00:48:35.149868 | orchestrator | Wednesday 10 September 2025 00:44:50 +0000 (0:00:00.964) 0:00:05.596 ***
2025-09-10 00:48:35.149880 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:48:35.149892 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:48:35.149904 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:48:35.149917 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:48:35.149928 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:48:35.149941 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:48:35.149953 | orchestrator |
2025-09-10 00:48:35.149965 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2025-09-10 00:48:35.149977 | orchestrator | Wednesday 10 September 2025 00:44:52 +0000 (0:00:01.053) 0:00:06.650 ***
2025-09-10 00:48:35.149989 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:48:35.150002 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:48:35.150014 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:48:35.150089 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:48:35.150102 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:48:35.150113 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:48:35.150124 | orchestrator |
2025-09-10 00:48:35.150134 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2025-09-10 00:48:35.150145 | orchestrator | Wednesday 10 September 2025 00:44:52 +0000 (0:00:00.742) 0:00:07.393 ***
2025-09-10 00:48:35.150156 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:48:35.150166 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:48:35.150177 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:48:35.150187 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:48:35.150198 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:48:35.150209 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:48:35.150220 | orchestrator |
2025-09-10 00:48:35.150231 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2025-09-10 00:48:35.150241 | orchestrator | Wednesday 10 September 2025 00:44:53 +0000 (0:00:00.728) 0:00:08.121 ***
2025-09-10 00:48:35.150252 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-10 00:48:35.150263 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-10 00:48:35.150273 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:48:35.150284 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-10 00:48:35.150294 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-10 00:48:35.150305 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:48:35.150316 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-10 00:48:35.150326 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-10 00:48:35.150337 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:48:35.150347 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-10 00:48:35.150372 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-10 00:48:35.150383 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:48:35.150394 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-10 00:48:35.150404 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-10 00:48:35.150415 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:48:35.150426 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-10 00:48:35.150463 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-10 00:48:35.150475 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:48:35.150485 | orchestrator |
2025-09-10 00:48:35.150496 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2025-09-10 00:48:35.150507 | orchestrator | Wednesday 10 September 2025 00:44:54 +0000 (0:00:00.718) 0:00:08.840 ***
2025-09-10 00:48:35.150517 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:48:35.150528 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:48:35.150539 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:48:35.150549 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:48:35.150560 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:48:35.150570 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:48:35.150581 | orchestrator |
2025-09-10 00:48:35.150592 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2025-09-10 00:48:35.150604 | orchestrator | Wednesday 10 September 2025 00:44:55 +0000 (0:00:01.548) 0:00:10.388 ***
2025-09-10 00:48:35.150614 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:48:35.150625 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:48:35.150636 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:48:35.150646 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:48:35.150657 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:48:35.150668 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:48:35.150678 | orchestrator |
2025-09-10 00:48:35.150689 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2025-09-10 00:48:35.150706 | orchestrator | Wednesday 10 September 2025 00:44:56 +0000 (0:00:00.725) 0:00:11.114 ***
2025-09-10 00:48:35.150717 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:48:35.150728 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:48:35.150739 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:48:35.150749 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:48:35.150760 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:48:35.150771 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:48:35.150781 | orchestrator |
2025-09-10 00:48:35.150792 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2025-09-10 00:48:35.150803 | orchestrator | Wednesday 10 September 2025 00:45:02 +0000 (0:00:05.931) 0:00:17.045 ***
2025-09-10 00:48:35.150814 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:48:35.150824 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:48:35.150835 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:48:35.150846 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:48:35.150856 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:48:35.150867 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:48:35.150878 | orchestrator |
2025-09-10 00:48:35.150889 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2025-09-10 00:48:35.150899 | orchestrator | Wednesday 10 September 2025 00:45:03 +0000 (0:00:01.404) 0:00:18.450 ***
2025-09-10 00:48:35.150910 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:48:35.150920 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:48:35.150931 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:48:35.150941 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:48:35.150952 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:48:35.150962 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:48:35.150973 | orchestrator |
2025-09-10 00:48:35.150984 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2025-09-10 00:48:35.150997 | orchestrator | Wednesday 10 September 2025 00:45:06 +0000 (0:00:02.384) 0:00:20.835 ***
2025-09-10 00:48:35.151007 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:48:35.151018 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:48:35.151029 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:48:35.151039 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:48:35.151050 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:48:35.151068 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:48:35.151078 | orchestrator |
2025-09-10 00:48:35.151089 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2025-09-10 00:48:35.151100 | orchestrator | Wednesday 10 September 2025 00:45:06 +0000 (0:00:00.753) 0:00:21.588 ***
2025-09-10 00:48:35.151111 | orchestrator | changed: [testbed-node-3] => (item=rancher)
2025-09-10 00:48:35.151122 | orchestrator | changed: [testbed-node-4] => (item=rancher)
2025-09-10 00:48:35.151133 | orchestrator | changed: [testbed-node-5] => (item=rancher)
2025-09-10 00:48:35.151144 | orchestrator | changed: [testbed-node-3] => (item=rancher/k3s)
2025-09-10 00:48:35.151154 | orchestrator | changed: [testbed-node-0] => (item=rancher)
2025-09-10 00:48:35.151165 | orchestrator | changed: [testbed-node-4] => (item=rancher/k3s)
2025-09-10 00:48:35.151176 | orchestrator | changed: [testbed-node-5] => (item=rancher/k3s)
2025-09-10 00:48:35.151187 | orchestrator | changed: [testbed-node-1] => (item=rancher)
2025-09-10 00:48:35.151197 | orchestrator | changed: [testbed-node-0] => (item=rancher/k3s)
2025-09-10 00:48:35.151208 | orchestrator | changed: [testbed-node-2] => (item=rancher)
2025-09-10 00:48:35.151219 | orchestrator | changed: [testbed-node-1] => (item=rancher/k3s)
2025-09-10 00:48:35.151229 | orchestrator | changed: [testbed-node-2] => (item=rancher/k3s)
2025-09-10 00:48:35.151240 | orchestrator |
2025-09-10 00:48:35.151251 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2025-09-10 00:48:35.151261 | orchestrator | Wednesday 10 September 2025 00:45:09 +0000 (0:00:02.731) 0:00:24.320 ***
2025-09-10 00:48:35.151272 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:48:35.151283 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:48:35.151293 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:48:35.151304 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:48:35.151315 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:48:35.151325 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:48:35.151336 | orchestrator |
2025-09-10 00:48:35.151353 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2025-09-10 00:48:35.151364 | orchestrator |
2025-09-10 00:48:35.151375 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2025-09-10 00:48:35.151386 | orchestrator | Wednesday 10 September 2025 00:45:11 +0000 (0:00:02.182) 0:00:26.502 ***
2025-09-10 00:48:35.151397 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:48:35.151407 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:48:35.151418 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:48:35.151428 | orchestrator |
2025-09-10 00:48:35.151490 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2025-09-10 00:48:35.151502 | orchestrator | Wednesday 10 September 2025 00:45:13 +0000 (0:00:01.754) 0:00:28.257 ***
2025-09-10 00:48:35.151512 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:48:35.151523 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:48:35.151534 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:48:35.151544 | orchestrator |
2025-09-10 00:48:35.151555 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2025-09-10 00:48:35.151566 | orchestrator | Wednesday 10 September 2025 00:45:15 +0000 (0:00:01.578) 0:00:29.835 ***
2025-09-10 00:48:35.151577 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:48:35.151588 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:48:35.151598 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:48:35.151609 | orchestrator |
2025-09-10 00:48:35.151619 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2025-09-10 00:48:35.151630 | orchestrator | Wednesday 10 September 2025 00:45:16 +0000 (0:00:00.998) 0:00:30.834 ***
2025-09-10 00:48:35.151641 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:48:35.151652 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:48:35.151662 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:48:35.151673 | orchestrator |
2025-09-10 00:48:35.151684 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2025-09-10 00:48:35.151694 | orchestrator | Wednesday 10 September 2025 00:45:17 +0000 (0:00:01.727) 0:00:32.561 ***
2025-09-10 00:48:35.151713 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:48:35.151724 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:48:35.151740 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:48:35.151750 | orchestrator |
2025-09-10 00:48:35.151760 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2025-09-10 00:48:35.151770 | orchestrator | Wednesday 10 September 2025 00:45:18 +0000 (0:00:00.425) 0:00:32.987 ***
2025-09-10 00:48:35.151779 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:48:35.151789 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:48:35.151798 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:48:35.151808 | orchestrator |
2025-09-10 00:48:35.151818 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2025-09-10 00:48:35.151827 | orchestrator | Wednesday 10 September 2025 00:45:19 +0000 (0:00:00.895) 0:00:33.882 ***
2025-09-10 00:48:35.151837 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:48:35.151847 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:48:35.151856 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:48:35.151866 | orchestrator |
2025-09-10 00:48:35.151876 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2025-09-10 00:48:35.151885 | orchestrator | Wednesday 10 September 2025 00:45:20 +0000 (0:00:01.628) 0:00:35.511 ***
2025-09-10 00:48:35.151895 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-10 00:48:35.151904 | orchestrator |
2025-09-10 00:48:35.151914 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2025-09-10 00:48:35.151923 | orchestrator | Wednesday 10 September 2025 00:45:21 +0000 (0:00:01.056) 0:00:36.567 ***
2025-09-10 00:48:35.151933 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:48:35.151943 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:48:35.151952 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:48:35.151962 | orchestrator |
2025-09-10 00:48:35.151971 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2025-09-10 00:48:35.151981 | orchestrator | Wednesday 10 September 2025 00:45:24 +0000 (0:00:02.246) 0:00:38.814 ***
2025-09-10 00:48:35.151990 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:48:35.152000 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:48:35.152010 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:48:35.152019 | orchestrator |
2025-09-10 00:48:35.152029 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2025-09-10 00:48:35.152038 | orchestrator | Wednesday 10 September 2025 00:45:24 +0000 (0:00:00.575) 0:00:39.389 ***
2025-09-10 00:48:35.152048 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:48:35.152058 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:48:35.152067 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:48:35.152077 | orchestrator |
2025-09-10 00:48:35.152086 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2025-09-10 00:48:35.152096 | orchestrator | Wednesday 10 September 2025 00:45:26 +0000 (0:00:01.557) 0:00:40.946 ***
2025-09-10 00:48:35.152105 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:48:35.152115 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:48:35.152124 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:48:35.152134 | orchestrator |
2025-09-10 00:48:35.152144 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2025-09-10 00:48:35.152153 | orchestrator | Wednesday 10 September 2025 00:45:28 +0000 (0:00:01.813) 0:00:42.759 ***
2025-09-10 00:48:35.152163 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:48:35.152172 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:48:35.152182 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:48:35.152191 | orchestrator |
2025-09-10 00:48:35.152201 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2025-09-10 00:48:35.152210 | orchestrator | Wednesday 10 September 2025 00:45:28 +0000 (0:00:00.593) 0:00:43.352 ***
2025-09-10 00:48:35.152220 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:48:35.152229 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:48:35.152244 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:48:35.152254 | orchestrator |
2025-09-10 00:48:35.152263 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2025-09-10 00:48:35.152273 | orchestrator | Wednesday 10 September 2025 00:45:29 +0000 (0:00:00.507) 0:00:43.860 ***
2025-09-10 00:48:35.152283 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:48:35.152292 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:48:35.152302 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:48:35.152312 | orchestrator |
2025-09-10 00:48:35.152327 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2025-09-10 00:48:35.152338 | orchestrator | Wednesday 10 September 2025 00:45:31 +0000 (0:00:01.947) 0:00:45.807 ***
2025-09-10 00:48:35.152348 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-09-10 00:48:35.152358 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-09-10 00:48:35.152368 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-09-10 00:48:35.152378 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-09-10 00:48:35.152388 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-09-10 00:48:35.152397 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-09-10 00:48:35.152406 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-09-10 00:48:35.152416 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-09-10 00:48:35.152429 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-09-10 00:48:35.152454 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-09-10 00:48:35.152464 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-09-10 00:48:35.152473 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-09-10 00:48:35.152483 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2025-09-10 00:48:35.152492 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2025-09-10 00:48:35.152502 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2025-09-10 00:48:35.152511 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:48:35.152521 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:48:35.152531 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:48:35.152540 | orchestrator |
2025-09-10 00:48:35.152550 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2025-09-10 00:48:35.152559 | orchestrator | Wednesday 10 September 2025 00:46:26 +0000 (0:00:55.576) 0:01:41.384 ***
2025-09-10 00:48:35.152569 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:48:35.152578 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:48:35.152588 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:48:35.152597 | orchestrator |
2025-09-10 00:48:35.152607 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2025-09-10 00:48:35.152622 | orchestrator | Wednesday 10 September 2025 00:46:27 +0000 (0:00:00.429) 0:01:41.813 ***
2025-09-10 00:48:35.152632 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:48:35.152642 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:48:35.152651 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:48:35.152661 | orchestrator |
2025-09-10 00:48:35.152670 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2025-09-10 00:48:35.152680 | orchestrator | Wednesday 10 September 2025 00:46:28 +0000 (0:00:01.180) 0:01:42.994 ***
2025-09-10 00:48:35.152689 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:48:35.152699 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:48:35.152708 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:48:35.152718 | orchestrator |
2025-09-10 00:48:35.152727 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2025-09-10 00:48:35.152737 | orchestrator | Wednesday 10 September 2025 00:46:29 +0000 (0:00:01.393) 0:01:44.388 ***
2025-09-10 00:48:35.152746 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:48:35.152756 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:48:35.152765 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:48:35.152774 | orchestrator |
2025-09-10 00:48:35.152784 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2025-09-10 00:48:35.152793 | orchestrator | Wednesday 10 September 2025 00:46:55 +0000 (0:00:25.283) 0:02:09.672 ***
2025-09-10 00:48:35.152803 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:48:35.152812 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:48:35.152822 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:48:35.152831 | orchestrator |
2025-09-10 00:48:35.152841 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2025-09-10 00:48:35.152850 | orchestrator | Wednesday 10 September 2025 00:46:55 +0000 (0:00:00.722) 0:02:10.394 ***
2025-09-10 00:48:35.152860 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:48:35.152869 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:48:35.152879 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:48:35.152888 | orchestrator |
2025-09-10 00:48:35.152902 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2025-09-10 00:48:35.152912 | orchestrator | Wednesday 10 September 2025 00:46:56 +0000 (0:00:00.626) 0:02:11.021 ***
2025-09-10 00:48:35.152922 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:48:35.152931 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:48:35.152941 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:48:35.152950 | orchestrator |
2025-09-10 00:48:35.152960 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2025-09-10 00:48:35.152969 | orchestrator | Wednesday 10 September 2025 00:46:57 +0000 (0:00:00.633) 0:02:11.654 ***
2025-09-10 00:48:35.152979 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:48:35.152988 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:48:35.152998 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:48:35.153007 | orchestrator |
2025-09-10 00:48:35.153016 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2025-09-10 00:48:35.153026 | orchestrator | Wednesday 10 September 2025 00:46:57 +0000 (0:00:00.858) 0:02:12.512 ***
2025-09-10 00:48:35.153035 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:48:35.153045 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:48:35.153054 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:48:35.153063 | orchestrator |
2025-09-10 00:48:35.153073 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2025-09-10 00:48:35.153082 | orchestrator | Wednesday 10 September 2025 00:46:58 +0000 (0:00:00.302) 0:02:12.815 ***
2025-09-10 00:48:35.153092 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:48:35.153101 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:48:35.153111 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:48:35.153120 | orchestrator |
2025-09-10 00:48:35.153130 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2025-09-10 00:48:35.153139 | orchestrator | Wednesday 10 September 2025 00:46:58 +0000 (0:00:00.622) 0:02:13.438 ***
2025-09-10 00:48:35.153155 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:48:35.153164 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:48:35.153173 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:48:35.153183 | orchestrator |
2025-09-10 00:48:35.153197 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2025-09-10 00:48:35.153207 | orchestrator | Wednesday 10 September 2025 00:46:59 +0000 (0:00:00.636) 0:02:14.074 ***
2025-09-10 00:48:35.153216 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:48:35.153226 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:48:35.153235 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:48:35.153244 | orchestrator |
2025-09-10 00:48:35.153254 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2025-09-10 00:48:35.153263 | orchestrator | Wednesday 10 September 2025 00:47:00 +0000 (0:00:01.142) 0:02:15.217 ***
2025-09-10 00:48:35.153273 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:48:35.153282 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:48:35.153292 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:48:35.153301 | orchestrator |
2025-09-10 00:48:35.153310 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2025-09-10 00:48:35.153320 | orchestrator | Wednesday 10 September 2025 00:47:01 +0000 (0:00:00.830) 0:02:16.047 ***
2025-09-10 00:48:35.153330 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:48:35.153339 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:48:35.153348 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:48:35.153358 | orchestrator |
2025-09-10 00:48:35.153367 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2025-09-10 00:48:35.153377 | orchestrator | Wednesday 10 September 2025 00:47:01 +0000 (0:00:00.274) 0:02:16.321 ***
2025-09-10 00:48:35.153386 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:48:35.153395 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:48:35.153405 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:48:35.153414 | orchestrator |
2025-09-10 00:48:35.153424 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2025-09-10 00:48:35.153472 | orchestrator | Wednesday 10 September 2025 00:47:01 +0000 (0:00:00.257) 0:02:16.579 ***
2025-09-10 00:48:35.153483 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:48:35.153492 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:48:35.153500 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:48:35.153507 | orchestrator |
2025-09-10 00:48:35.153515 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2025-09-10 00:48:35.153523 | orchestrator | Wednesday 10 September 2025 00:47:02 +0000 (0:00:00.804) 0:02:17.383 ***
2025-09-10 00:48:35.153531 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:48:35.153539 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:48:35.153546 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:48:35.153554 | orchestrator |
2025-09-10 00:48:35.153562 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2025-09-10 00:48:35.153570 | orchestrator | Wednesday 10 September 2025 00:47:03 +0000 (0:00:00.594) 0:02:17.978 ***
2025-09-10 00:48:35.153578 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-09-10 00:48:35.153586 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-09-10 00:48:35.153594 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-09-10 00:48:35.153601 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-09-10 00:48:35.153609 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-09-10 00:48:35.153617 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-09-10 00:48:35.153625 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-09-10 00:48:35.153638 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-09-10 00:48:35.153646 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-09-10 00:48:35.153659 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2025-09-10 00:48:35.153667 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-09-10 00:48:35.153675 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-09-10 00:48:35.153683 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2025-09-10 00:48:35.153690 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-09-10 00:48:35.153698 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-09-10 00:48:35.153706 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-09-10 00:48:35.153713 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-09-10 00:48:35.153721 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-09-10 00:48:35.153729 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-09-10 00:48:35.153736 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-09-10 00:48:35.153744 | orchestrator |
2025-09-10 00:48:35.153752 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2025-09-10 00:48:35.153760 | orchestrator |
2025-09-10 00:48:35.153768 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2025-09-10 00:48:35.153776 | orchestrator | Wednesday 10 September 2025 00:47:06 +0000 (0:00:03.069) 0:02:21.047 ***
2025-09-10 00:48:35.153783 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:48:35.153791 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:48:35.153803 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:48:35.153811 | orchestrator |
2025-09-10 00:48:35.153819 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2025-09-10 00:48:35.153826 | orchestrator | Wednesday 10 September 2025 00:47:06 +0000 (0:00:00.525) 0:02:21.573 ***
2025-09-10 00:48:35.153834 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:48:35.153842 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:48:35.153850 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:48:35.153857 | orchestrator |
2025-09-10 00:48:35.153865 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2025-09-10 00:48:35.153873 | orchestrator | Wednesday 10 September 2025 00:47:07 +0000 (0:00:00.608) 0:02:22.181 ***
2025-09-10 00:48:35.153880 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:48:35.153888 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:48:35.153896 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:48:35.153903 | orchestrator |
2025-09-10 00:48:35.153911 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2025-09-10 00:48:35.153919 | orchestrator | Wednesday 10 September 2025 00:47:07 +0000 (0:00:00.337) 0:02:22.518 ***
2025-09-10 00:48:35.153927 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-10 00:48:35.153935 | orchestrator |
2025-09-10 00:48:35.153943 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2025-09-10 00:48:35.153950 | orchestrator | Wednesday 10 September 2025 00:47:08 +0000 (0:00:00.694) 0:02:23.213 ***
2025-09-10 00:48:35.153958 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:48:35.153966 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:48:35.153974 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:48:35.153981 | orchestrator |
2025-09-10 00:48:35.153989 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2025-09-10 00:48:35.154006 | orchestrator | Wednesday 10 September 2025 00:47:08 +0000 (0:00:00.336) 0:02:23.549 ***
2025-09-10 00:48:35.154014 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:48:35.154054 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:48:35.154062 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:48:35.154070 | orchestrator |
2025-09-10 00:48:35.154078 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2025-09-10 00:48:35.154086 | orchestrator | Wednesday 10 September 2025 00:47:09 +0000 (0:00:00.370) 0:02:23.920 ***
2025-09-10 00:48:35.154093 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:48:35.154102 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:48:35.154109 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:48:35.154117 | orchestrator |
2025-09-10 00:48:35.154125 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2025-09-10 00:48:35.154132 | orchestrator | Wednesday 10 September 2025 00:47:09 +0000 (0:00:00.300) 0:02:24.221 ***
2025-09-10 00:48:35.154140 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:48:35.154148 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:48:35.154156 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:48:35.154163 | orchestrator |
2025-09-10 00:48:35.154171 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2025-09-10 00:48:35.154179 | orchestrator | Wednesday 10 September 2025 00:47:10 +0000 (0:00:00.652) 0:02:24.874 ***
2025-09-10 00:48:35.154187 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:48:35.154194 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:48:35.154202 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:48:35.154210 | orchestrator |
2025-09-10 00:48:35.154218 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2025-09-10 00:48:35.154225 | orchestrator | Wednesday 10 September 2025 00:47:11 +0000 (0:00:01.409) 0:02:26.284 ***
2025-09-10 00:48:35.154233 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:48:35.154241 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:48:35.154249 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:48:35.154256 | orchestrator |
2025-09-10 00:48:35.154264 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2025-09-10 00:48:35.154272 | orchestrator | Wednesday 10 September 2025 00:47:12 +0000 (0:00:01.228) 0:02:27.512 ***
2025-09-10 00:48:35.154279 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:48:35.154287 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:48:35.154295 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:48:35.154303 | orchestrator |
2025-09-10 00:48:35.154316 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2025-09-10 00:48:35.154324 | orchestrator |
2025-09-10 00:48:35.154332 | orchestrator | TASK [Get home directory of operator user] *************************************
2025-09-10 00:48:35.154340 | orchestrator | Wednesday 10 September 2025 00:47:25 +0000 (0:00:12.855) 0:02:40.367 ***
2025-09-10 00:48:35.154348 | orchestrator | ok: [testbed-manager]
2025-09-10 00:48:35.154356 | orchestrator |
2025-09-10 00:48:35.154363 | orchestrator | TASK [Create .kube directory] **************************************************
2025-09-10 00:48:35.154371 | orchestrator | Wednesday 10 September 2025 00:47:26 +0000 (0:00:00.709) 0:02:41.077 ***
2025-09-10 00:48:35.154379 | orchestrator | changed: [testbed-manager]
2025-09-10 00:48:35.154387 | orchestrator |
2025-09-10 00:48:35.154395 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-09-10 00:48:35.154403 | orchestrator | Wednesday 10 September 2025 00:47:26 +0000 (0:00:00.366) 0:02:41.444 ***
2025-09-10 00:48:35.154410 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-09-10 00:48:35.154418 | orchestrator |
2025-09-10 00:48:35.154426 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-09-10 00:48:35.154446 | orchestrator | Wednesday 10 September 2025 00:47:27 +0000 (0:00:00.521) 0:02:41.965 ***
2025-09-10 00:48:35.154455 | orchestrator | changed: [testbed-manager]
2025-09-10 00:48:35.154462 | orchestrator |
2025-09-10 00:48:35.154470 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2025-09-10 00:48:35.154483 | orchestrator | Wednesday 10 September 2025 00:47:28 +0000 (0:00:01.006) 0:02:42.972 ***
2025-09-10 00:48:35.154491 | orchestrator | changed: [testbed-manager]
2025-09-10 00:48:35.154499 | orchestrator |
2025-09-10 00:48:35.154507 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2025-09-10 00:48:35.154515 | orchestrator | Wednesday 10 September 2025 00:47:29 +0000 (0:00:00.790) 0:02:43.762 ***
2025-09-10 00:48:35.154523 | orchestrator | changed: [testbed-manager -> localhost]
2025-09-10 00:48:35.154530 | orchestrator |
2025-09-10 00:48:35.154542 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2025-09-10 00:48:35.154550 | orchestrator | Wednesday 10 September 2025 00:47:31 +0000 (0:00:02.104) 0:02:45.867 ***
2025-09-10 00:48:35.154558 | orchestrator | changed: [testbed-manager -> localhost]
2025-09-10 00:48:35.154566 | orchestrator |
2025-09-10 00:48:35.154573 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2025-09-10 00:48:35.154581 | orchestrator | Wednesday 10 September 2025 00:47:32 +0000 (0:00:00.876) 0:02:46.744 ***
2025-09-10 00:48:35.154589 | orchestrator | changed: [testbed-manager]
2025-09-10 00:48:35.154597 | orchestrator |
2025-09-10 00:48:35.154604 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2025-09-10 00:48:35.154612 | orchestrator | Wednesday 10 September 2025 00:47:32 +0000 (0:00:00.471) 0:02:47.215 ***
2025-09-10 00:48:35.154620 | orchestrator | changed: [testbed-manager]
2025-09-10 00:48:35.154627 | orchestrator |
2025-09-10 00:48:35.154635 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2025-09-10 00:48:35.154643 | orchestrator |
2025-09-10 00:48:35.154651 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2025-09-10 00:48:35.154658 | orchestrator | Wednesday 10 September 2025 00:47:33 +0000 (0:00:01.196) 0:02:48.411 ***
2025-09-10 00:48:35.154666 | orchestrator | ok: [testbed-manager]
2025-09-10 00:48:35.154674 | orchestrator |
2025-09-10 00:48:35.154682 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2025-09-10 00:48:35.154689 | orchestrator | Wednesday 10 September 2025 00:47:33 +0000 (0:00:00.107) 0:02:48.518 ***
2025-09-10 00:48:35.154697 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2025-09-10 00:48:35.154705 | orchestrator |
2025-09-10 00:48:35.154713 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2025-09-10 00:48:35.154720 | orchestrator | Wednesday 10 September 2025 00:47:34 +0000 (0:00:00.192) 0:02:48.711 ***
2025-09-10 00:48:35.154728 | orchestrator | ok: [testbed-manager]
2025-09-10 00:48:35.154736 | orchestrator |
2025-09-10 00:48:35.154743 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2025-09-10 00:48:35.154751 | orchestrator | Wednesday 10 September 2025 00:47:34 +0000 (0:00:00.807) 0:02:49.519 ***
2025-09-10 00:48:35.154759 | orchestrator | ok: [testbed-manager]
2025-09-10 00:48:35.154767 | orchestrator |
2025-09-10 00:48:35.154774 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2025-09-10 00:48:35.154782 | orchestrator | Wednesday 10 September 2025 00:47:36 +0000 (0:00:01.338) 0:02:50.857 ***
2025-09-10 00:48:35.154790 | orchestrator | changed: [testbed-manager]
2025-09-10 00:48:35.154798 | orchestrator |
2025-09-10 00:48:35.154805 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2025-09-10 00:48:35.154813 | orchestrator | Wednesday 10 September 2025 00:47:36 +0000 (0:00:00.730) 0:02:51.588 ***
2025-09-10 00:48:35.154821 | orchestrator | ok: [testbed-manager]
2025-09-10 00:48:35.154829 | orchestrator |
2025-09-10 00:48:35.154837 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2025-09-10 00:48:35.154844 | orchestrator | Wednesday 10 September 2025 00:47:37 +0000 (0:00:00.564) 0:02:52.152 ***
2025-09-10 00:48:35.154852 | orchestrator | changed: [testbed-manager]
2025-09-10 00:48:35.154860 | orchestrator |
2025-09-10 00:48:35.154867 | orchestrator | TASK [kubectl : Install required packages] *************************************
2025-09-10 00:48:35.154875 | orchestrator | Wednesday 10 September 2025 00:47:46 +0000 (0:00:09.442) 0:03:01.595 ***
2025-09-10 00:48:35.154890 | orchestrator | changed: [testbed-manager]
2025-09-10 00:48:35.154898 | orchestrator |
2025-09-10 00:48:35.154906 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2025-09-10 00:48:35.154914 | orchestrator | Wednesday 10 September 2025 00:48:02 +0000 (0:00:15.665) 0:03:17.260 ***
2025-09-10 00:48:35.154922 | orchestrator | ok: [testbed-manager]
2025-09-10 00:48:35.154929 | orchestrator |
2025-09-10 00:48:35.154937 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2025-09-10 00:48:35.154945 | orchestrator |
2025-09-10 00:48:35.154953 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2025-09-10 00:48:35.154965 | orchestrator | Wednesday 10 September 2025 00:48:03 +0000 (0:00:00.809) 0:03:18.069 ***
2025-09-10 00:48:35.154973 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:48:35.154980 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:48:35.154988 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:48:35.154996 | orchestrator |
2025-09-10 00:48:35.155004 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2025-09-10 00:48:35.155011 | orchestrator | Wednesday 10 September 2025 00:48:03 +0000 (0:00:00.447) 0:03:18.516 ***
2025-09-10 00:48:35.155019 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:48:35.155027 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:48:35.155035 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:48:35.155042 | orchestrator |
2025-09-10 00:48:35.155050 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2025-09-10 00:48:35.155058 | orchestrator | Wednesday 10 September 2025 00:48:04 +0000 (0:00:00.386) 0:03:18.904 ***
2025-09-10 00:48:35.155066 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-10 00:48:35.155073 | orchestrator |
2025-09-10 00:48:35.155081 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2025-09-10 00:48:35.155089 | orchestrator | Wednesday 10 September 2025 00:48:04 +0000 (0:00:00.629) 0:03:19.533 ***
2025-09-10 00:48:35.155097 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:48:35.155104 | orchestrator |
2025-09-10 00:48:35.155112 | orchestrator | TASK [k3s_server_post : Check if Cilium CLI is installed] **********************
2025-09-10 00:48:35.155120 | orchestrator | Wednesday 10 September 2025 00:48:05 +0000 (0:00:00.232) 0:03:19.765 ***
2025-09-10 00:48:35.155128 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:48:35.155135 | orchestrator |
2025-09-10 00:48:35.155143 | orchestrator | TASK [k3s_server_post : Check for Cilium CLI version in command output] ********
2025-09-10 00:48:35.155151 | orchestrator | Wednesday 10 September 2025 00:48:05 +0000 (0:00:00.260) 0:03:20.025 ***
2025-09-10 00:48:35.155159 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:48:35.155167 | orchestrator |
2025-09-10 00:48:35.155178 | orchestrator | TASK [k3s_server_post : Get latest stable Cilium CLI version file] *************
2025-09-10 00:48:35.155186 | orchestrator | Wednesday 10 September 2025 00:48:05 +0000 (0:00:00.192) 0:03:20.218 ***
2025-09-10 00:48:35.155194 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:48:35.155202 | orchestrator |
2025-09-10 00:48:35.155210 | orchestrator | TASK [k3s_server_post : Read Cilium CLI stable version from file] **************
2025-09-10 00:48:35.155217 | orchestrator | Wednesday 10 September 2025 00:48:05 +0000 (0:00:00.185) 0:03:20.403 ***
2025-09-10 00:48:35.155225 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:48:35.155233 | orchestrator |
2025-09-10 00:48:35.155241 | orchestrator | TASK [k3s_server_post : Log installed Cilium CLI version] **********************
2025-09-10 00:48:35.155248 | orchestrator | Wednesday 10 September 2025 00:48:06 +0000 (0:00:00.224) 0:03:20.627 ***
2025-09-10 00:48:35.155256 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:48:35.155264 | orchestrator |
2025-09-10 00:48:35.155272 | orchestrator | TASK [k3s_server_post : Log latest stable Cilium CLI version] ******************
2025-09-10 00:48:35.155279 | orchestrator | Wednesday 10 September 2025 00:48:06 +0000 (0:00:00.221) 0:03:20.849 ***
2025-09-10 00:48:35.155287 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:48:35.155300 | orchestrator |
2025-09-10 00:48:35.155308 | orchestrator | TASK [k3s_server_post : Determine if Cilium CLI needs installation or update] ***
2025-09-10 00:48:35.155316 | orchestrator | Wednesday 10 September 2025 00:48:06 +0000 (0:00:00.212) 0:03:21.061 ***
2025-09-10 00:48:35.155324 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:48:35.155331 | orchestrator |
2025-09-10 00:48:35.155339 | orchestrator | TASK [k3s_server_post : Set architecture variable] *****************************
2025-09-10 00:48:35.155347 | orchestrator | Wednesday 10 September 2025 00:48:06 +0000 (0:00:00.200) 0:03:21.262 ***
2025-09-10 00:48:35.155355 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:48:35.155363 | orchestrator |
2025-09-10 00:48:35.155371 | orchestrator | TASK [k3s_server_post : Download Cilium CLI and checksum] **********************
2025-09-10 00:48:35.155378 | orchestrator | Wednesday 10 September 2025 00:48:06 +0000 (0:00:00.203) 0:03:21.466 ***
2025-09-10 00:48:35.155386 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz)
2025-09-10 00:48:35.155394 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz.sha256sum)
2025-09-10 00:48:35.155401 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:48:35.155409 | orchestrator |
2025-09-10 00:48:35.155417 | orchestrator | TASK [k3s_server_post : Verify the downloaded tarball] *************************
2025-09-10 00:48:35.155425 | orchestrator | Wednesday 10 September 2025 00:48:07 +0000 (0:00:00.596) 0:03:22.062 ***
2025-09-10 00:48:35.155446 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:48:35.155454 | orchestrator |
2025-09-10 00:48:35.155462 | orchestrator | TASK [k3s_server_post : Extract Cilium CLI to /usr/local/bin] ******************
2025-09-10 00:48:35.155470 | orchestrator | Wednesday 10 September 2025 00:48:07 +0000 (0:00:00.255) 0:03:22.318 ***
2025-09-10 00:48:35.155478 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:48:35.155485 | orchestrator |
2025-09-10 00:48:35.155493 | orchestrator | TASK [k3s_server_post : Remove downloaded tarball and checksum file] ***********
2025-09-10 00:48:35.155501 | orchestrator | Wednesday 10 September 2025 00:48:07 +0000 (0:00:00.236) 0:03:22.554 ***
2025-09-10 00:48:35.155509 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:48:35.155517 | orchestrator |
2025-09-10 00:48:35.155524 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2025-09-10 00:48:35.155532 | orchestrator | Wednesday 10 September 2025 00:48:08 +0000 (0:00:00.246) 0:03:22.800 ***
2025-09-10 00:48:35.155540 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:48:35.155548 | orchestrator |
2025-09-10 00:48:35.155556 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2025-09-10 00:48:35.155563 | orchestrator | Wednesday 10 September 2025 00:48:08 +0000 (0:00:00.209) 0:03:23.010 ***
2025-09-10 00:48:35.155571 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:48:35.155579 | orchestrator |
2025-09-10 00:48:35.155587 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2025-09-10 00:48:35.155595 | orchestrator | Wednesday 10 September 2025 00:48:08 +0000 (0:00:00.190) 0:03:23.201 ***
2025-09-10 00:48:35.155603 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:48:35.155610 | orchestrator |
2025-09-10 00:48:35.155618 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2025-09-10 00:48:35.155630 | orchestrator | Wednesday 10 September 2025 00:48:08 +0000 (0:00:00.160) 0:03:23.362 ***
2025-09-10 00:48:35.155638 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:48:35.155646 | orchestrator |
2025-09-10 00:48:35.155654 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2025-09-10 00:48:35.155662 | orchestrator | Wednesday 10 September 2025 00:48:08 +0000 (0:00:00.156) 0:03:23.518 ***
2025-09-10 00:48:35.155670 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:48:35.155677 | orchestrator |
2025-09-10 00:48:35.155685 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2025-09-10 00:48:35.155694 | orchestrator | Wednesday 10 September 2025 00:48:09 +0000 (0:00:00.209) 0:03:23.728 ***
2025-09-10 00:48:35.155701 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:48:35.155709 | orchestrator |
2025-09-10 00:48:35.155717 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2025-09-10 00:48:35.155729 | orchestrator | Wednesday 10 September 2025 00:48:09 +0000 (0:00:00.179) 0:03:23.908 ***
2025-09-10 00:48:35.155737 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:48:35.155745 | orchestrator |
2025-09-10 00:48:35.155753 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2025-09-10 00:48:35.155761 | orchestrator | Wednesday 10 September 2025 00:48:09 +0000 (0:00:00.203) 0:03:24.112 ***
2025-09-10 00:48:35.155768 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:48:35.155776 | orchestrator |
2025-09-10 00:48:35.155784 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2025-09-10 00:48:35.155792 | orchestrator | Wednesday 10 September 2025 00:48:09 +0000 (0:00:00.207) 0:03:24.320 ***
2025-09-10 00:48:35.155799 | orchestrator | skipping:
[testbed-node-0] => (item=deployment/cilium-operator)  2025-09-10 00:48:35.155807 | orchestrator | skipping: [testbed-node-0] => (item=daemonset/cilium)  2025-09-10 00:48:35.155815 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-relay)  2025-09-10 00:48:35.155826 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-ui)  2025-09-10 00:48:35.155834 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:48:35.155842 | orchestrator | 2025-09-10 00:48:35.155850 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2025-09-10 00:48:35.155858 | orchestrator | Wednesday 10 September 2025 00:48:10 +0000 (0:00:00.790) 0:03:25.110 *** 2025-09-10 00:48:35.155866 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:48:35.155874 | orchestrator | 2025-09-10 00:48:35.155881 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2025-09-10 00:48:35.155889 | orchestrator | Wednesday 10 September 2025 00:48:10 +0000 (0:00:00.247) 0:03:25.357 *** 2025-09-10 00:48:35.155897 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:48:35.155905 | orchestrator | 2025-09-10 00:48:35.155913 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2025-09-10 00:48:35.155920 | orchestrator | Wednesday 10 September 2025 00:48:10 +0000 (0:00:00.167) 0:03:25.525 *** 2025-09-10 00:48:35.155928 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:48:35.155936 | orchestrator | 2025-09-10 00:48:35.155944 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2025-09-10 00:48:35.155952 | orchestrator | Wednesday 10 September 2025 00:48:11 +0000 (0:00:00.193) 0:03:25.718 *** 2025-09-10 00:48:35.155960 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:48:35.155967 | orchestrator | 2025-09-10 00:48:35.155975 | orchestrator | TASK [k3s_server_post : Test for BGP config 
resources] ************************* 2025-09-10 00:48:35.155983 | orchestrator | Wednesday 10 September 2025 00:48:11 +0000 (0:00:00.175) 0:03:25.894 *** 2025-09-10 00:48:35.155991 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)  2025-09-10 00:48:35.155998 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)  2025-09-10 00:48:35.156006 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:48:35.156014 | orchestrator | 2025-09-10 00:48:35.156021 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2025-09-10 00:48:35.156029 | orchestrator | Wednesday 10 September 2025 00:48:11 +0000 (0:00:00.343) 0:03:26.238 *** 2025-09-10 00:48:35.156037 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:48:35.156045 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:48:35.156052 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:48:35.156060 | orchestrator | 2025-09-10 00:48:35.156068 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2025-09-10 00:48:35.156075 | orchestrator | Wednesday 10 September 2025 00:48:12 +0000 (0:00:00.409) 0:03:26.647 *** 2025-09-10 00:48:35.156083 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:48:35.156091 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:48:35.156098 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:48:35.156106 | orchestrator | 2025-09-10 00:48:35.156114 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2025-09-10 00:48:35.156126 | orchestrator | 2025-09-10 00:48:35.156134 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2025-09-10 00:48:35.156142 | orchestrator | Wednesday 10 September 2025 00:48:13 +0000 (0:00:01.230) 0:03:27.878 *** 2025-09-10 00:48:35.156149 | orchestrator | ok: [testbed-manager] 2025-09-10 
00:48:35.156157 | orchestrator | 2025-09-10 00:48:35.156165 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2025-09-10 00:48:35.156173 | orchestrator | Wednesday 10 September 2025 00:48:13 +0000 (0:00:00.201) 0:03:28.079 *** 2025-09-10 00:48:35.156180 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2025-09-10 00:48:35.156188 | orchestrator | 2025-09-10 00:48:35.156196 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2025-09-10 00:48:35.156204 | orchestrator | Wednesday 10 September 2025 00:48:13 +0000 (0:00:00.356) 0:03:28.435 *** 2025-09-10 00:48:35.156212 | orchestrator | changed: [testbed-manager] 2025-09-10 00:48:35.156219 | orchestrator | 2025-09-10 00:48:35.156227 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2025-09-10 00:48:35.156235 | orchestrator | 2025-09-10 00:48:35.156243 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2025-09-10 00:48:35.156254 | orchestrator | Wednesday 10 September 2025 00:48:19 +0000 (0:00:05.394) 0:03:33.829 *** 2025-09-10 00:48:35.156262 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:48:35.156270 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:48:35.156278 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:48:35.156286 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:48:35.156294 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:48:35.156301 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:48:35.156309 | orchestrator | 2025-09-10 00:48:35.156317 | orchestrator | TASK [Manage labels] *********************************************************** 2025-09-10 00:48:35.156325 | orchestrator | Wednesday 10 September 2025 00:48:19 +0000 (0:00:00.533) 0:03:34.363 *** 2025-09-10 00:48:35.156333 | orchestrator | ok: [testbed-node-3 -> localhost] => 
(item=node-role.osism.tech/compute-plane=true) 2025-09-10 00:48:35.156340 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-09-10 00:48:35.156348 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-09-10 00:48:35.156356 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-09-10 00:48:35.156364 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-09-10 00:48:35.156371 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-09-10 00:48:35.156379 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-09-10 00:48:35.156387 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-09-10 00:48:35.156395 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-10 00:48:35.156402 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-09-10 00:48:35.156414 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-10 00:48:35.156422 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-10 00:48:35.156429 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-10 00:48:35.156472 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-10 00:48:35.156480 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-10 00:48:35.156488 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-10 00:48:35.156496 | orchestrator | ok: [testbed-node-5 -> 
localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-10 00:48:35.156510 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-09-10 00:48:35.156518 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-10 00:48:35.156526 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-09-10 00:48:35.156533 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-09-10 00:48:35.156541 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-09-10 00:48:35.156549 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-09-10 00:48:35.156557 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-09-10 00:48:35.156565 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-09-10 00:48:35.156573 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-09-10 00:48:35.156580 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-09-10 00:48:35.156588 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-09-10 00:48:35.156596 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-09-10 00:48:35.156604 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-09-10 00:48:35.156612 | orchestrator | 2025-09-10 00:48:35.156620 | orchestrator | TASK [Manage annotations] ****************************************************** 2025-09-10 00:48:35.156627 | orchestrator | Wednesday 10 September 2025 00:48:33 +0000 (0:00:13.424) 0:03:47.788 *** 2025-09-10 00:48:35.156636 | orchestrator | skipping: 
[testbed-node-3] 2025-09-10 00:48:35.156644 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:48:35.156652 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:48:35.156659 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:48:35.156667 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:48:35.156675 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:48:35.156683 | orchestrator | 2025-09-10 00:48:35.156691 | orchestrator | TASK [Manage taints] *********************************************************** 2025-09-10 00:48:35.156698 | orchestrator | Wednesday 10 September 2025 00:48:33 +0000 (0:00:00.632) 0:03:48.420 *** 2025-09-10 00:48:35.156706 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:48:35.156714 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:48:35.156722 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:48:35.156730 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:48:35.156738 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:48:35.156745 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:48:35.156753 | orchestrator | 2025-09-10 00:48:35.156761 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-10 00:48:35.156774 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-10 00:48:35.156783 | orchestrator | testbed-node-0 : ok=42  changed=20  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0 2025-09-10 00:48:35.156791 | orchestrator | testbed-node-1 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-10 00:48:35.156799 | orchestrator | testbed-node-2 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-10 00:48:35.156807 | orchestrator | testbed-node-3 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-09-10 00:48:35.156815 | orchestrator | testbed-node-4 : ok=19  
changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-09-10 00:48:35.156827 | orchestrator | testbed-node-5 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-09-10 00:48:35.156834 | orchestrator | 2025-09-10 00:48:35.156842 | orchestrator | 2025-09-10 00:48:35.156850 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-10 00:48:35.156857 | orchestrator | Wednesday 10 September 2025 00:48:34 +0000 (0:00:00.410) 0:03:48.830 *** 2025-09-10 00:48:35.156864 | orchestrator | =============================================================================== 2025-09-10 00:48:35.156874 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 55.58s 2025-09-10 00:48:35.156881 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.28s 2025-09-10 00:48:35.156887 | orchestrator | kubectl : Install required packages ------------------------------------ 15.67s 2025-09-10 00:48:35.156894 | orchestrator | Manage labels ---------------------------------------------------------- 13.42s 2025-09-10 00:48:35.156900 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 12.86s 2025-09-10 00:48:35.156907 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 9.44s 2025-09-10 00:48:35.156913 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.93s 2025-09-10 00:48:35.156920 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.39s 2025-09-10 00:48:35.156926 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.07s 2025-09-10 00:48:35.156933 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 2.73s 2025-09-10 00:48:35.156940 | 
orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.38s 2025-09-10 00:48:35.156946 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.25s 2025-09-10 00:48:35.156953 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 2.18s 2025-09-10 00:48:35.156960 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 2.10s 2025-09-10 00:48:35.156966 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.03s 2025-09-10 00:48:35.156972 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 1.95s 2025-09-10 00:48:35.156979 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 1.81s 2025-09-10 00:48:35.156985 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 1.75s 2025-09-10 00:48:35.156992 | orchestrator | k3s_server : Clean previous runs of k3s-init ---------------------------- 1.73s 2025-09-10 00:48:35.156998 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 1.63s 2025-09-10 00:48:35.157005 | orchestrator | 2025-09-10 00:48:35 | INFO  | Task a67a1127-092e-4986-b868-18f92804abb6 is in state STARTED 2025-09-10 00:48:35.157012 | orchestrator | 2025-09-10 00:48:35 | INFO  | Task a1213808-8d6f-4369-92f9-e840781d332a is in state STARTED 2025-09-10 00:48:35.157018 | orchestrator | 2025-09-10 00:48:35 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED 2025-09-10 00:48:35.157025 | orchestrator | 2025-09-10 00:48:35 | INFO  | Task 03706a7e-02cf-4713-9bb4-57191532b70c is in state STARTED 2025-09-10 00:48:35.157032 | orchestrator | 2025-09-10 00:48:35 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:48:38.177307 | orchestrator | 2025-09-10 00:48:38 | INFO  | Task 
a67a1127-092e-4986-b868-18f92804abb6 is in state STARTED
2025-09-10 00:48:38.177721 | orchestrator | 2025-09-10 00:48:38 | INFO  | Task a1213808-8d6f-4369-92f9-e840781d332a is in state STARTED
2025-09-10 00:48:38.178548 | orchestrator | 2025-09-10 00:48:38 | INFO  | Task 810d5714-9416-415c-b02a-d147b4d954ce is in state STARTED
2025-09-10 00:48:38.180219 | orchestrator | 2025-09-10 00:48:38 | INFO  | Task 70c64356-d6ab-4e0e-bf4d-7174690c06df is in state STARTED
2025-09-10 00:48:38.180276 | orchestrator | 2025-09-10 00:48:38 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED
2025-09-10 00:48:38.181026 | orchestrator | 2025-09-10 00:48:38 | INFO  | Task 03706a7e-02cf-4713-9bb4-57191532b70c is in state STARTED
2025-09-10 00:48:38.181049 | orchestrator | 2025-09-10 00:48:38 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:48:41.211172 | orchestrator | 2025-09-10 00:48:41 | INFO  | Task a67a1127-092e-4986-b868-18f92804abb6 is in state STARTED
2025-09-10 00:48:41.211471 | orchestrator | 2025-09-10 00:48:41 | INFO  | Task a1213808-8d6f-4369-92f9-e840781d332a is in state STARTED
2025-09-10 00:48:41.212291 | orchestrator | 2025-09-10 00:48:41 | INFO  | Task 810d5714-9416-415c-b02a-d147b4d954ce is in state STARTED
2025-09-10 00:48:41.213024 | orchestrator | 2025-09-10 00:48:41 | INFO  | Task 70c64356-d6ab-4e0e-bf4d-7174690c06df is in state SUCCESS
2025-09-10 00:48:41.213865 | orchestrator | 2025-09-10 00:48:41 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED
2025-09-10 00:48:41.214736 | orchestrator | 2025-09-10 00:48:41 | INFO  | Task 03706a7e-02cf-4713-9bb4-57191532b70c is in state STARTED
2025-09-10 00:48:41.214935 | orchestrator | 2025-09-10 00:48:41 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:48:44.254621 | orchestrator | 2025-09-10 00:48:44 | INFO  | Task a67a1127-092e-4986-b868-18f92804abb6 is in state STARTED
2025-09-10 00:48:44.257402 | orchestrator | 2025-09-10 00:48:44 | INFO  | Task a1213808-8d6f-4369-92f9-e840781d332a is in state STARTED
2025-09-10 00:48:44.257515 | orchestrator | 2025-09-10 00:48:44 | INFO  | Task 810d5714-9416-415c-b02a-d147b4d954ce is in state STARTED
2025-09-10 00:48:44.258375 | orchestrator | 2025-09-10 00:48:44 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED
2025-09-10 00:48:44.259844 | orchestrator | 2025-09-10 00:48:44 | INFO  | Task 03706a7e-02cf-4713-9bb4-57191532b70c is in state STARTED
2025-09-10 00:48:44.261571 | orchestrator | 2025-09-10 00:48:44 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:48:47.313879 | orchestrator | 2025-09-10 00:48:47 | INFO  | Task a67a1127-092e-4986-b868-18f92804abb6 is in state STARTED
2025-09-10 00:48:47.315720 | orchestrator | 2025-09-10 00:48:47 | INFO  | Task a1213808-8d6f-4369-92f9-e840781d332a is in state STARTED
2025-09-10 00:48:47.316923 | orchestrator | 2025-09-10 00:48:47 | INFO  | Task 810d5714-9416-415c-b02a-d147b4d954ce is in state SUCCESS
2025-09-10 00:48:47.318767 | orchestrator | 2025-09-10 00:48:47 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED
2025-09-10 00:48:47.319935 | orchestrator | 2025-09-10 00:48:47 | INFO  | Task 03706a7e-02cf-4713-9bb4-57191532b70c is in state STARTED
2025-09-10 00:48:47.319973 | orchestrator | 2025-09-10 00:48:47 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:48:50.367749 | orchestrator | 2025-09-10 00:48:50 | INFO  | Task a67a1127-092e-4986-b868-18f92804abb6 is in state STARTED
2025-09-10 00:48:50.368091 | orchestrator | 2025-09-10 00:48:50 | INFO  | Task a1213808-8d6f-4369-92f9-e840781d332a is in state STARTED
2025-09-10 00:48:50.369902 | orchestrator | 2025-09-10 00:48:50 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED
2025-09-10 00:48:50.371950 | orchestrator | 2025-09-10 00:48:50 | INFO  | Task 03706a7e-02cf-4713-9bb4-57191532b70c is in state STARTED
2025-09-10 00:48:50.372095 | orchestrator | 2025-09-10 00:48:50 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:49:57.456766 | orchestrator | 2025-09-10 00:49:57 | INFO  | Task
a67a1127-092e-4986-b868-18f92804abb6 is in state SUCCESS 2025-09-10 00:49:57.458387 | orchestrator | 2025-09-10 00:49:57.458431 | orchestrator | 2025-09-10 00:49:57.458440 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-09-10 00:49:57.458450 | orchestrator | 2025-09-10 00:49:57.458464 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-10 00:49:57.458473 | orchestrator | Wednesday 10 September 2025 00:48:38 +0000 (0:00:00.138) 0:00:00.138 *** 2025-09-10 00:49:57.458481 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-10 00:49:57.458490 | orchestrator | 2025-09-10 00:49:57.458498 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-09-10 00:49:57.458506 | orchestrator | Wednesday 10 September 2025 00:48:39 +0000 (0:00:00.723) 0:00:00.862 *** 2025-09-10 00:49:57.458514 | orchestrator | changed: [testbed-manager] 2025-09-10 00:49:57.458522 | orchestrator | 2025-09-10 00:49:57.458530 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-09-10 00:49:57.458538 | orchestrator | Wednesday 10 September 2025 00:48:40 +0000 (0:00:00.964) 0:00:01.826 *** 2025-09-10 00:49:57.458546 | orchestrator | changed: [testbed-manager] 2025-09-10 00:49:57.458554 | orchestrator | 2025-09-10 00:49:57.458577 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-10 00:49:57.458587 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-10 00:49:57.458596 | orchestrator | 2025-09-10 00:49:57.458604 | orchestrator | 2025-09-10 00:49:57.458612 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-10 00:49:57.458619 | orchestrator | Wednesday 10 September 2025 00:48:40 +0000 (0:00:00.396) 0:00:02.223 *** 
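The repeated "Task … is in state STARTED / Wait 1 second(s) until the next check" lines above are the OSISM client polling its remotely queued tasks until they reach a terminal state. A minimal sketch of such a polling loop is shown below; `wait_for_tasks`, `get_state`, and the state names are illustrative assumptions modeled on the log output, not the actual osism implementation.

```python
import time

# Assumed terminal states, mirroring the STARTED -> SUCCESS transition in the log.
TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=3600, sleep=time.sleep):
    """Poll each task's state until all tasks reach a terminal state.

    get_state(task_id) returns a state string such as "STARTED" or "SUCCESS".
    Returns a dict mapping task id to its final state; raises TimeoutError if
    tasks are still pending once the deadline passes.
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    final = {}
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                final[task_id] = state
        pending -= set(final)
        if pending:
            if time.monotonic() > deadline:
                raise TimeoutError(f"tasks still pending: {sorted(pending)}")
            print(f"Wait {int(interval)} second(s) until the next check")
            sleep(interval)
    return final
```

Injecting `sleep` as a parameter keeps the loop testable without real delays; in production the default `time.sleep` reproduces the one-second cadence visible in the log.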
2025-09-10 00:49:57.458627 | orchestrator | =============================================================================== 2025-09-10 00:49:57.458635 | orchestrator | Write kubeconfig file --------------------------------------------------- 0.96s 2025-09-10 00:49:57.458643 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.72s 2025-09-10 00:49:57.458650 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.40s 2025-09-10 00:49:57.458658 | orchestrator | 2025-09-10 00:49:57.458666 | orchestrator | 2025-09-10 00:49:57.458673 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-09-10 00:49:57.458681 | orchestrator | 2025-09-10 00:49:57.458689 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-09-10 00:49:57.458696 | orchestrator | Wednesday 10 September 2025 00:48:38 +0000 (0:00:00.155) 0:00:00.155 *** 2025-09-10 00:49:57.458704 | orchestrator | ok: [testbed-manager] 2025-09-10 00:49:57.458713 | orchestrator | 2025-09-10 00:49:57.458721 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-09-10 00:49:57.458728 | orchestrator | Wednesday 10 September 2025 00:48:39 +0000 (0:00:00.724) 0:00:00.879 *** 2025-09-10 00:49:57.458736 | orchestrator | ok: [testbed-manager] 2025-09-10 00:49:57.458744 | orchestrator | 2025-09-10 00:49:57.458752 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-10 00:49:57.458760 | orchestrator | Wednesday 10 September 2025 00:48:40 +0000 (0:00:00.532) 0:00:01.412 *** 2025-09-10 00:49:57.458768 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-10 00:49:57.458775 | orchestrator | 2025-09-10 00:49:57.458783 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-09-10 00:49:57.458791 | 
orchestrator | Wednesday 10 September 2025 00:48:40 +0000 (0:00:00.778) 0:00:02.190 *** 2025-09-10 00:49:57.458799 | orchestrator | changed: [testbed-manager] 2025-09-10 00:49:57.458806 | orchestrator | 2025-09-10 00:49:57.458814 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-09-10 00:49:57.458822 | orchestrator | Wednesday 10 September 2025 00:48:41 +0000 (0:00:01.030) 0:00:03.221 *** 2025-09-10 00:49:57.458830 | orchestrator | changed: [testbed-manager] 2025-09-10 00:49:57.458838 | orchestrator | 2025-09-10 00:49:57.458846 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-09-10 00:49:57.458854 | orchestrator | Wednesday 10 September 2025 00:48:42 +0000 (0:00:00.827) 0:00:04.048 *** 2025-09-10 00:49:57.458862 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-10 00:49:57.458870 | orchestrator | 2025-09-10 00:49:57.458878 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-09-10 00:49:57.458885 | orchestrator | Wednesday 10 September 2025 00:48:44 +0000 (0:00:01.831) 0:00:05.880 *** 2025-09-10 00:49:57.458893 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-10 00:49:57.458901 | orchestrator | 2025-09-10 00:49:57.458909 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-09-10 00:49:57.458916 | orchestrator | Wednesday 10 September 2025 00:48:45 +0000 (0:00:00.873) 0:00:06.753 *** 2025-09-10 00:49:57.458924 | orchestrator | ok: [testbed-manager] 2025-09-10 00:49:57.458932 | orchestrator | 2025-09-10 00:49:57.458940 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-09-10 00:49:57.458949 | orchestrator | Wednesday 10 September 2025 00:48:45 +0000 (0:00:00.438) 0:00:07.192 *** 2025-09-10 00:49:57.458959 | orchestrator | ok: [testbed-manager] 2025-09-10 00:49:57.458968 | 
orchestrator | 2025-09-10 00:49:57.458983 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-10 00:49:57.458992 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-10 00:49:57.459002 | orchestrator | 2025-09-10 00:49:57.459011 | orchestrator | 2025-09-10 00:49:57.459020 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-10 00:49:57.459028 | orchestrator | Wednesday 10 September 2025 00:48:46 +0000 (0:00:00.414) 0:00:07.607 *** 2025-09-10 00:49:57.459036 | orchestrator | =============================================================================== 2025-09-10 00:49:57.459044 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.83s 2025-09-10 00:49:57.459051 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.03s 2025-09-10 00:49:57.459059 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.87s 2025-09-10 00:49:57.459076 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.83s 2025-09-10 00:49:57.459085 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.78s 2025-09-10 00:49:57.459096 | orchestrator | Get home directory of operator user ------------------------------------- 0.72s 2025-09-10 00:49:57.459104 | orchestrator | Create .kube directory -------------------------------------------------- 0.53s 2025-09-10 00:49:57.459112 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.44s 2025-09-10 00:49:57.459119 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.41s 2025-09-10 00:49:57.459127 | orchestrator | 2025-09-10 00:49:57.459135 | orchestrator | 2025-09-10 00:49:57.459143 | orchestrator | PLAY [Set 
kolla_action_rabbitmq] *********************************************** 2025-09-10 00:49:57.459150 | orchestrator | 2025-09-10 00:49:57.459158 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-09-10 00:49:57.459166 | orchestrator | Wednesday 10 September 2025 00:47:39 +0000 (0:00:00.337) 0:00:00.337 *** 2025-09-10 00:49:57.459173 | orchestrator | ok: [localhost] => { 2025-09-10 00:49:57.459181 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-09-10 00:49:57.459190 | orchestrator | } 2025-09-10 00:49:57.459198 | orchestrator | 2025-09-10 00:49:57.459205 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-09-10 00:49:57.459213 | orchestrator | Wednesday 10 September 2025 00:47:39 +0000 (0:00:00.061) 0:00:00.398 *** 2025-09-10 00:49:57.459222 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-09-10 00:49:57.459231 | orchestrator | ...ignoring 2025-09-10 00:49:57.459239 | orchestrator | 2025-09-10 00:49:57.459247 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-09-10 00:49:57.459255 | orchestrator | Wednesday 10 September 2025 00:47:42 +0000 (0:00:03.005) 0:00:03.404 *** 2025-09-10 00:49:57.459262 | orchestrator | skipping: [localhost] 2025-09-10 00:49:57.459270 | orchestrator | 2025-09-10 00:49:57.459278 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-09-10 00:49:57.459286 | orchestrator | Wednesday 10 September 2025 00:47:42 +0000 (0:00:00.038) 0:00:03.442 *** 2025-09-10 00:49:57.459293 | orchestrator | ok: [localhost] 2025-09-10 00:49:57.459301 | orchestrator | 2025-09-10 00:49:57.459324 | orchestrator | PLAY [Group hosts based on configuration] 
************************************** 2025-09-10 00:49:57.459332 | orchestrator | 2025-09-10 00:49:57.459340 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-10 00:49:57.459347 | orchestrator | Wednesday 10 September 2025 00:47:42 +0000 (0:00:00.154) 0:00:03.597 *** 2025-09-10 00:49:57.459355 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:49:57.459363 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:49:57.459370 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:49:57.459378 | orchestrator | 2025-09-10 00:49:57.459386 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-10 00:49:57.459398 | orchestrator | Wednesday 10 September 2025 00:47:43 +0000 (0:00:00.279) 0:00:03.877 *** 2025-09-10 00:49:57.459406 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-09-10 00:49:57.459414 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-09-10 00:49:57.459422 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-09-10 00:49:57.459429 | orchestrator | 2025-09-10 00:49:57.459437 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-09-10 00:49:57.459445 | orchestrator | 2025-09-10 00:49:57.459452 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-10 00:49:57.459460 | orchestrator | Wednesday 10 September 2025 00:47:43 +0000 (0:00:00.475) 0:00:04.352 *** 2025-09-10 00:49:57.459468 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 00:49:57.459475 | orchestrator | 2025-09-10 00:49:57.459483 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-09-10 00:49:57.459491 | orchestrator | Wednesday 10 September 2025 00:47:44 +0000 (0:00:00.499) 0:00:04.852 *** 2025-09-10 
00:49:57.459498 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:49:57.459506 | orchestrator | 2025-09-10 00:49:57.459514 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-09-10 00:49:57.459521 | orchestrator | Wednesday 10 September 2025 00:47:45 +0000 (0:00:00.932) 0:00:05.784 *** 2025-09-10 00:49:57.459529 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:49:57.459537 | orchestrator | 2025-09-10 00:49:57.459545 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2025-09-10 00:49:57.459552 | orchestrator | Wednesday 10 September 2025 00:47:45 +0000 (0:00:00.329) 0:00:06.114 *** 2025-09-10 00:49:57.459560 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:49:57.459568 | orchestrator | 2025-09-10 00:49:57.459615 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-09-10 00:49:57.459625 | orchestrator | Wednesday 10 September 2025 00:47:45 +0000 (0:00:00.437) 0:00:06.552 *** 2025-09-10 00:49:57.459634 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:49:57.459642 | orchestrator | 2025-09-10 00:49:57.459650 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-09-10 00:49:57.459658 | orchestrator | Wednesday 10 September 2025 00:47:46 +0000 (0:00:01.162) 0:00:07.714 *** 2025-09-10 00:49:57.459666 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:49:57.459674 | orchestrator | 2025-09-10 00:49:57.459682 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-10 00:49:57.459690 | orchestrator | Wednesday 10 September 2025 00:47:47 +0000 (0:00:00.679) 0:00:08.394 *** 2025-09-10 00:49:57.459698 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 00:49:57.459717 | orchestrator | 2025-09-10 00:49:57.459725 
| orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-09-10 00:49:57.459746 | orchestrator | Wednesday 10 September 2025 00:47:48 +0000 (0:00:01.354) 0:00:09.748 *** 2025-09-10 00:49:57.459755 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:49:57.459762 | orchestrator | 2025-09-10 00:49:57.459770 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-09-10 00:49:57.459782 | orchestrator | Wednesday 10 September 2025 00:47:50 +0000 (0:00:01.597) 0:00:11.345 *** 2025-09-10 00:49:57.459790 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:49:57.459798 | orchestrator | 2025-09-10 00:49:57.459805 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-09-10 00:49:57.459813 | orchestrator | Wednesday 10 September 2025 00:47:50 +0000 (0:00:00.311) 0:00:11.657 *** 2025-09-10 00:49:57.459821 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:49:57.459829 | orchestrator | 2025-09-10 00:49:57.459836 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-09-10 00:49:57.459844 | orchestrator | Wednesday 10 September 2025 00:47:51 +0000 (0:00:00.953) 0:00:12.610 *** 2025-09-10 00:49:57.459865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-10 00:49:57.459878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-10 00:49:57.459887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-10 00:49:57.459896 | orchestrator | 2025-09-10 00:49:57.459904 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-09-10 00:49:57.459912 | orchestrator | Wednesday 10 September 2025 00:47:53 +0000 (0:00:01.295) 0:00:13.906 *** 2025-09-10 00:49:57.459930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-10 00:49:57.459945 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-10 00:49:57.459953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': 
'30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-10 00:49:57.459962 | orchestrator | 2025-09-10 00:49:57.459969 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-09-10 00:49:57.459977 | orchestrator | Wednesday 10 September 2025 00:47:55 +0000 (0:00:02.047) 0:00:15.954 *** 2025-09-10 00:49:57.459985 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-10 00:49:57.459993 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-10 00:49:57.460001 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-10 00:49:57.460009 | orchestrator | 2025-09-10 00:49:57.460016 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-09-10 00:49:57.460024 | orchestrator | Wednesday 10 September 2025 00:47:57 +0000 (0:00:02.009) 0:00:17.963 *** 2025-09-10 00:49:57.460032 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-10 00:49:57.460039 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-10 00:49:57.460047 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-10 00:49:57.460055 | orchestrator | 2025-09-10 00:49:57.460062 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-09-10 00:49:57.460079 | orchestrator | Wednesday 10 September 2025 00:47:59 +0000 (0:00:02.408) 0:00:20.371 *** 2025-09-10 00:49:57.460088 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-10 00:49:57.460098 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-10 00:49:57.460106 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-10 00:49:57.460114 | orchestrator | 2025-09-10 00:49:57.460122 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-09-10 00:49:57.460130 | orchestrator | Wednesday 10 September 2025 00:48:02 +0000 (0:00:02.485) 0:00:22.857 *** 2025-09-10 00:49:57.460137 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-10 00:49:57.460145 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-10 00:49:57.460153 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-10 00:49:57.460161 | orchestrator | 2025-09-10 00:49:57.460168 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-09-10 00:49:57.460176 | orchestrator | Wednesday 10 September 2025 00:48:04 +0000 (0:00:02.770) 0:00:25.627 *** 2025-09-10 00:49:57.460184 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-10 00:49:57.460191 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-10 00:49:57.460199 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-10 00:49:57.460207 | orchestrator | 2025-09-10 00:49:57.460214 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-09-10 00:49:57.460222 | orchestrator | Wednesday 10 September 2025 00:48:06 +0000 (0:00:01.575) 0:00:27.202 *** 2025-09-10 00:49:57.460230 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-10 
00:49:57.460238 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-10 00:49:57.460246 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-10 00:49:57.460253 | orchestrator | 2025-09-10 00:49:57.460261 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-10 00:49:57.460269 | orchestrator | Wednesday 10 September 2025 00:48:08 +0000 (0:00:02.029) 0:00:29.232 *** 2025-09-10 00:49:57.460277 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:49:57.460285 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:49:57.460292 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:49:57.460300 | orchestrator | 2025-09-10 00:49:57.460322 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-09-10 00:49:57.460330 | orchestrator | Wednesday 10 September 2025 00:48:08 +0000 (0:00:00.371) 0:00:29.603 *** 2025-09-10 00:49:57.460339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 
'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-10 00:49:57.460358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-10 00:49:57.460372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-10 00:49:57.460380 | orchestrator | 2025-09-10 00:49:57.460389 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-09-10 00:49:57.460396 | orchestrator | Wednesday 10 September 2025 00:48:10 +0000 (0:00:01.526) 0:00:31.130 *** 2025-09-10 00:49:57.460404 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:49:57.460412 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:49:57.460420 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:49:57.460427 | orchestrator | 2025-09-10 00:49:57.460435 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-09-10 00:49:57.460443 | orchestrator | Wednesday 10 September 2025 00:48:11 +0000 (0:00:01.097) 0:00:32.228 *** 2025-09-10 00:49:57.460450 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:49:57.460458 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:49:57.460466 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:49:57.460474 | orchestrator | 2025-09-10 00:49:57.460481 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-09-10 00:49:57.460489 | orchestrator | Wednesday 10 September 2025 00:48:19 +0000 (0:00:07.630) 0:00:39.859 *** 2025-09-10 00:49:57.460497 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:49:57.460504 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:49:57.460512 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:49:57.460520 | orchestrator | 2025-09-10 00:49:57.460528 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 
2025-09-10 00:49:57.460535 | orchestrator | 2025-09-10 00:49:57.460543 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-10 00:49:57.460551 | orchestrator | Wednesday 10 September 2025 00:48:19 +0000 (0:00:00.441) 0:00:40.300 *** 2025-09-10 00:49:57.460558 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:49:57.460571 | orchestrator | 2025-09-10 00:49:57.460579 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-10 00:49:57.460587 | orchestrator | Wednesday 10 September 2025 00:48:20 +0000 (0:00:00.565) 0:00:40.866 *** 2025-09-10 00:49:57.460594 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:49:57.460602 | orchestrator | 2025-09-10 00:49:57.460610 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-10 00:49:57.460618 | orchestrator | Wednesday 10 September 2025 00:48:20 +0000 (0:00:00.219) 0:00:41.085 *** 2025-09-10 00:49:57.460625 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:49:57.460633 | orchestrator | 2025-09-10 00:49:57.460641 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-10 00:49:57.460648 | orchestrator | Wednesday 10 September 2025 00:48:22 +0000 (0:00:01.786) 0:00:42.871 *** 2025-09-10 00:49:57.460656 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:49:57.460664 | orchestrator | 2025-09-10 00:49:57.460672 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-10 00:49:57.460679 | orchestrator | 2025-09-10 00:49:57.460687 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-10 00:49:57.460695 | orchestrator | Wednesday 10 September 2025 00:49:16 +0000 (0:00:54.201) 0:01:37.073 *** 2025-09-10 00:49:57.460702 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:49:57.460710 | orchestrator | 2025-09-10 
00:49:57.460718 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-10 00:49:57.460726 | orchestrator | Wednesday 10 September 2025 00:49:16 +0000 (0:00:00.651) 0:01:37.724 *** 2025-09-10 00:49:57.460757 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:49:57.460766 | orchestrator | 2025-09-10 00:49:57.460774 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-10 00:49:57.460782 | orchestrator | Wednesday 10 September 2025 00:49:17 +0000 (0:00:00.400) 0:01:38.125 *** 2025-09-10 00:49:57.460790 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:49:57.460798 | orchestrator | 2025-09-10 00:49:57.460805 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-10 00:49:57.460813 | orchestrator | Wednesday 10 September 2025 00:49:19 +0000 (0:00:01.693) 0:01:39.818 *** 2025-09-10 00:49:57.460821 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:49:57.460829 | orchestrator | 2025-09-10 00:49:57.460836 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-10 00:49:57.460844 | orchestrator | 2025-09-10 00:49:57.460852 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-10 00:49:57.460860 | orchestrator | Wednesday 10 September 2025 00:49:34 +0000 (0:00:15.356) 0:01:55.175 *** 2025-09-10 00:49:57.460867 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:49:57.460875 | orchestrator | 2025-09-10 00:49:57.460887 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-10 00:49:57.460895 | orchestrator | Wednesday 10 September 2025 00:49:35 +0000 (0:00:00.627) 0:01:55.802 *** 2025-09-10 00:49:57.460903 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:49:57.460911 | orchestrator | 2025-09-10 00:49:57.460925 | orchestrator | TASK [rabbitmq : 
Restart rabbitmq container] *********************************** 2025-09-10 00:49:57.460933 | orchestrator | Wednesday 10 September 2025 00:49:35 +0000 (0:00:00.237) 0:01:56.040 *** 2025-09-10 00:49:57.460941 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:49:57.460949 | orchestrator | 2025-09-10 00:49:57.460956 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-10 00:49:57.460964 | orchestrator | Wednesday 10 September 2025 00:49:37 +0000 (0:00:02.261) 0:01:58.302 *** 2025-09-10 00:49:57.460972 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:49:57.460980 | orchestrator | 2025-09-10 00:49:57.460987 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-09-10 00:49:57.460995 | orchestrator | 2025-09-10 00:49:57.461003 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-09-10 00:49:57.461011 | orchestrator | Wednesday 10 September 2025 00:49:51 +0000 (0:00:14.191) 0:02:12.494 *** 2025-09-10 00:49:57.461024 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 00:49:57.461031 | orchestrator | 2025-09-10 00:49:57.461039 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-09-10 00:49:57.461047 | orchestrator | Wednesday 10 September 2025 00:49:52 +0000 (0:00:00.507) 0:02:13.002 *** 2025-09-10 00:49:57.461055 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-09-10 00:49:57.461062 | orchestrator | enable_outward_rabbitmq_True 2025-09-10 00:49:57.461070 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-09-10 00:49:57.461078 | orchestrator | outward_rabbitmq_restart 2025-09-10 00:49:57.461086 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:49:57.461093 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:49:57.461101 | orchestrator | ok: 
[testbed-node-2] 2025-09-10 00:49:57.461109 | orchestrator | 2025-09-10 00:49:57.461117 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-09-10 00:49:57.461124 | orchestrator | skipping: no hosts matched 2025-09-10 00:49:57.461132 | orchestrator | 2025-09-10 00:49:57.461149 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-09-10 00:49:57.461157 | orchestrator | skipping: no hosts matched 2025-09-10 00:49:57.461165 | orchestrator | 2025-09-10 00:49:57.461173 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-09-10 00:49:57.461180 | orchestrator | skipping: no hosts matched 2025-09-10 00:49:57.461188 | orchestrator | 2025-09-10 00:49:57.461196 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-10 00:49:57.461204 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-09-10 00:49:57.461212 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-10 00:49:57.461220 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-10 00:49:57.461227 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-10 00:49:57.461235 | orchestrator | 2025-09-10 00:49:57.461243 | orchestrator | 2025-09-10 00:49:57.461251 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-10 00:49:57.461259 | orchestrator | Wednesday 10 September 2025 00:49:54 +0000 (0:00:02.695) 0:02:15.697 *** 2025-09-10 00:49:57.461266 | orchestrator | =============================================================================== 2025-09-10 00:49:57.461274 | orchestrator | rabbitmq : Waiting for rabbitmq to start 
------------------------------- 83.75s 2025-09-10 00:49:57.461282 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.63s 2025-09-10 00:49:57.461289 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 5.74s 2025-09-10 00:49:57.461297 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.01s 2025-09-10 00:49:57.461356 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.77s 2025-09-10 00:49:57.461365 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.70s 2025-09-10 00:49:57.461373 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.49s 2025-09-10 00:49:57.461380 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.41s 2025-09-10 00:49:57.461388 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.05s 2025-09-10 00:49:57.461396 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.03s 2025-09-10 00:49:57.461403 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.01s 2025-09-10 00:49:57.461411 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.84s 2025-09-10 00:49:57.461424 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.60s 2025-09-10 00:49:57.461432 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.58s 2025-09-10 00:49:57.461439 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.53s 2025-09-10 00:49:57.461447 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.35s 2025-09-10 00:49:57.461455 | orchestrator | rabbitmq : Ensuring config directories exist 
---------------------------- 1.30s 2025-09-10 00:49:57.461467 | orchestrator | rabbitmq : Check if running RabbitMQ is at most one version behind ------ 1.16s 2025-09-10 00:49:57.461475 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.10s 2025-09-10 00:49:57.461486 | orchestrator | rabbitmq : Remove ha-all policy from RabbitMQ --------------------------- 0.95s 2025-09-10 00:49:57.461581 | orchestrator | 2025-09-10 00:49:57 | INFO  | Task a1213808-8d6f-4369-92f9-e840781d332a is in state STARTED 2025-09-10 00:49:57.461592 | orchestrator | 2025-09-10 00:49:57 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED 2025-09-10 00:49:57.461757 | orchestrator | 2025-09-10 00:49:57 | INFO  | Task 03706a7e-02cf-4713-9bb4-57191532b70c is in state STARTED 2025-09-10 00:49:57.461851 | orchestrator | 2025-09-10 00:49:57 | INFO  | Wait 1 second(s) until the next check 
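The repeated "Task … is in state STARTED / Wait 1 second(s) until the next check" records above are produced by a simple status poll loop. A minimal sketch of such a loop (hypothetical `get_state` callback and function name — this is not the actual OSISM client API):

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=300.0):
    """Poll task states until none is STARTED, emitting log lines in the
    style seen above. `get_state` is an assumed callable mapping a task id
    to its state string (e.g. "STARTED", "SUCCESS")."""
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    while pending:
        # Iterate over a sorted copy so we can discard finished tasks safely.
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                pending.discard(task_id)
        if pending:
            if time.monotonic() > deadline:
                raise TimeoutError(f"tasks still running: {sorted(pending)}")
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return True
```

Polling with a fixed sleep between full passes, as the log shows, keeps the loop simple at the cost of up to one extra interval of latency after the last task finishes.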
2025-09-10 00:50:52 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED 2025-09-10 00:50:52.274332 | orchestrator | 2025-09-10 00:50:52.274370 | orchestrator | 2025-09-10 00:50:52.274419 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-10 00:50:52.274432 | orchestrator | 2025-09-10 00:50:52.274444 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-10 00:50:52.274455 | orchestrator | Wednesday 10 September 2025 00:48:27 +0000 (0:00:00.362) 0:00:00.362 *** 2025-09-10 00:50:52.274466 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:50:52.274479 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:50:52.274490 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:50:52.274501 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:50:52.274553 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:50:52.274566 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:50:52.274603 | orchestrator | 2025-09-10 00:50:52.274614 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-10 00:50:52.274625 | orchestrator | Wednesday 10 September 2025 00:48:28 +0000 (0:00:01.102) 0:00:01.465 *** 2025-09-10 00:50:52.274636 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-09-10 00:50:52.274648 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-09-10 00:50:52.274659 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-09-10 00:50:52.274669 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-09-10 00:50:52.274680 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-09-10 00:50:52.274690 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-09-10 00:50:52.274701 | orchestrator | 2025-09-10 00:50:52.274712 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-09-10 
00:50:52.274723 | orchestrator | 2025-09-10 00:50:52.274733 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-09-10 00:50:52.274744 | orchestrator | Wednesday 10 September 2025 00:48:30 +0000 (0:00:01.406) 0:00:02.872 *** 2025-09-10 00:50:52.274756 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-10 00:50:52.274768 | orchestrator | 2025-09-10 00:50:52.274779 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-09-10 00:50:52.274790 | orchestrator | Wednesday 10 September 2025 00:48:31 +0000 (0:00:01.751) 0:00:04.623 *** 2025-09-10 00:50:52.274803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.274817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.274828 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.274839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.274850 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.274951 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.274985 | orchestrator | 2025-09-10 00:50:52.275013 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-09-10 00:50:52.275027 | orchestrator | Wednesday 10 September 2025 00:48:32 +0000 (0:00:01.112) 0:00:05.736 *** 2025-09-10 00:50:52.275040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.275054 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.275067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.275079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.275091 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.275104 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.275116 | orchestrator | 2025-09-10 00:50:52.275130 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-09-10 00:50:52.275143 | orchestrator | Wednesday 10 September 2025 00:48:34 +0000 (0:00:01.562) 0:00:07.298 *** 2025-09-10 00:50:52.275155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.275169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.275203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.275242 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.275255 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.275268 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.275281 | orchestrator | 2025-09-10 00:50:52.275292 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-09-10 00:50:52.275303 | orchestrator | Wednesday 10 September 2025 00:48:36 +0000 (0:00:02.199) 0:00:09.498 *** 
2025-09-10 00:50:52.275314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.275325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.275336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.275347 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.275366 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 
'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.275382 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.275393 | orchestrator | 2025-09-10 00:50:52.275410 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-09-10 00:50:52.275421 | orchestrator | Wednesday 10 September 2025 00:48:38 +0000 (0:00:02.115) 0:00:11.613 *** 2025-09-10 00:50:52.275432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.275443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.275453 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.275464 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.275475 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.275486 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.275497 | orchestrator | 2025-09-10 00:50:52.275508 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-09-10 
00:50:52.275527 | orchestrator | Wednesday 10 September 2025 00:48:40 +0000 (0:00:01.715) 0:00:13.329 *** 2025-09-10 00:50:52.275538 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:50:52.275549 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:50:52.275559 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:50:52.275570 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:50:52.275581 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:50:52.275591 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:50:52.275602 | orchestrator | 2025-09-10 00:50:52.275612 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-09-10 00:50:52.275623 | orchestrator | Wednesday 10 September 2025 00:48:43 +0000 (0:00:03.029) 0:00:16.359 *** 2025-09-10 00:50:52.275634 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-09-10 00:50:52.275645 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-09-10 00:50:52.275655 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-09-10 00:50:52.275666 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-09-10 00:50:52.275676 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-09-10 00:50:52.275691 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-09-10 00:50:52.275702 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-10 00:50:52.275713 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-10 00:50:52.275728 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-10 
00:50:52.275739 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-10 00:50:52.275750 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-10 00:50:52.275761 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-10 00:50:52.275772 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-10 00:50:52.275783 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-10 00:50:52.275794 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-10 00:50:52.275805 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-10 00:50:52.275816 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-10 00:50:52.275827 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-10 00:50:52.275837 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-10 00:50:52.275848 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-10 00:50:52.275859 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-10 00:50:52.275870 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 
'value': '60000'}) 2025-09-10 00:50:52.275880 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-10 00:50:52.275891 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-10 00:50:52.275909 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-10 00:50:52.275920 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-10 00:50:52.275930 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-10 00:50:52.275941 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-10 00:50:52.275951 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-10 00:50:52.275961 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-10 00:50:52.275972 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-10 00:50:52.275983 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-10 00:50:52.275993 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-10 00:50:52.276004 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-10 00:50:52.276014 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-10 00:50:52.276025 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-10 00:50:52.276035 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 
'present'}) 2025-09-10 00:50:52.276046 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-09-10 00:50:52.276057 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-09-10 00:50:52.276067 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-09-10 00:50:52.276078 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-09-10 00:50:52.276088 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-09-10 00:50:52.276104 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-09-10 00:50:52.276115 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-09-10 00:50:52.276131 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-09-10 00:50:52.276142 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-09-10 00:50:52.276153 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-09-10 00:50:52.276163 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-09-10 00:50:52.276174 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 
'present'}) 2025-09-10 00:50:52.276185 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-09-10 00:50:52.276195 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-10 00:50:52.276206 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-10 00:50:52.276299 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-10 00:50:52.276311 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-09-10 00:50:52.276322 | orchestrator | 2025-09-10 00:50:52.276332 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-10 00:50:52.276343 | orchestrator | Wednesday 10 September 2025 00:49:04 +0000 (0:00:20.498) 0:00:36.858 *** 2025-09-10 00:50:52.276354 | orchestrator | 2025-09-10 00:50:52.276365 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-10 00:50:52.276375 | orchestrator | Wednesday 10 September 2025 00:49:04 +0000 (0:00:00.272) 0:00:37.130 *** 2025-09-10 00:50:52.276386 | orchestrator | 2025-09-10 00:50:52.276397 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-10 00:50:52.276407 | orchestrator | Wednesday 10 September 2025 00:49:04 +0000 (0:00:00.093) 0:00:37.224 *** 2025-09-10 00:50:52.276418 | orchestrator | 2025-09-10 00:50:52.276429 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-10 00:50:52.276440 | orchestrator | Wednesday 10 September 2025 00:49:04 +0000 (0:00:00.067) 0:00:37.291 *** 2025-09-10 00:50:52.276450 | orchestrator | 2025-09-10 
00:50:52.276461 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-10 00:50:52.276471 | orchestrator | Wednesday 10 September 2025 00:49:04 +0000 (0:00:00.082) 0:00:37.373 *** 2025-09-10 00:50:52.276482 | orchestrator | 2025-09-10 00:50:52.276493 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-10 00:50:52.276503 | orchestrator | Wednesday 10 September 2025 00:49:04 +0000 (0:00:00.068) 0:00:37.442 *** 2025-09-10 00:50:52.276514 | orchestrator | 2025-09-10 00:50:52.276525 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-09-10 00:50:52.276535 | orchestrator | Wednesday 10 September 2025 00:49:04 +0000 (0:00:00.067) 0:00:37.510 *** 2025-09-10 00:50:52.276546 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:50:52.276557 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:50:52.276568 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:50:52.276579 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:50:52.276589 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:50:52.276600 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:50:52.276611 | orchestrator | 2025-09-10 00:50:52.276621 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-09-10 00:50:52.276632 | orchestrator | Wednesday 10 September 2025 00:49:06 +0000 (0:00:01.699) 0:00:39.210 *** 2025-09-10 00:50:52.276643 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:50:52.276654 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:50:52.276665 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:50:52.276676 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:50:52.276686 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:50:52.276697 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:50:52.276708 | orchestrator | 2025-09-10 00:50:52.276718 | orchestrator | PLAY [Apply 
role ovn-db] ******************************************************* 2025-09-10 00:50:52.276729 | orchestrator | 2025-09-10 00:50:52.276740 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-09-10 00:50:52.276750 | orchestrator | Wednesday 10 September 2025 00:49:43 +0000 (0:00:36.743) 0:01:15.953 *** 2025-09-10 00:50:52.276761 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 00:50:52.276772 | orchestrator | 2025-09-10 00:50:52.276782 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-09-10 00:50:52.276793 | orchestrator | Wednesday 10 September 2025 00:49:43 +0000 (0:00:00.752) 0:01:16.706 *** 2025-09-10 00:50:52.276804 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 00:50:52.276815 | orchestrator | 2025-09-10 00:50:52.276826 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-09-10 00:50:52.276846 | orchestrator | Wednesday 10 September 2025 00:49:44 +0000 (0:00:00.539) 0:01:17.245 *** 2025-09-10 00:50:52.276857 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:50:52.276873 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:50:52.276884 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:50:52.276895 | orchestrator | 2025-09-10 00:50:52.276905 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-09-10 00:50:52.276916 | orchestrator | Wednesday 10 September 2025 00:49:45 +0000 (0:00:00.908) 0:01:18.154 *** 2025-09-10 00:50:52.276927 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:50:52.276938 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:50:52.276948 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:50:52.276964 | orchestrator | 2025-09-10 00:50:52.276975 | orchestrator | TASK [ovn-db 
: Divide hosts by their OVN SB volume availability] *************** 2025-09-10 00:50:52.276986 | orchestrator | Wednesday 10 September 2025 00:49:45 +0000 (0:00:00.326) 0:01:18.480 *** 2025-09-10 00:50:52.276997 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:50:52.277008 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:50:52.277019 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:50:52.277029 | orchestrator | 2025-09-10 00:50:52.277040 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-09-10 00:50:52.277051 | orchestrator | Wednesday 10 September 2025 00:49:46 +0000 (0:00:00.312) 0:01:18.793 *** 2025-09-10 00:50:52.277061 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:50:52.277072 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:50:52.277082 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:50:52.277093 | orchestrator | 2025-09-10 00:50:52.277103 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-09-10 00:50:52.277114 | orchestrator | Wednesday 10 September 2025 00:49:46 +0000 (0:00:00.312) 0:01:19.105 *** 2025-09-10 00:50:52.277125 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:50:52.277135 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:50:52.277146 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:50:52.277156 | orchestrator | 2025-09-10 00:50:52.277167 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-09-10 00:50:52.277178 | orchestrator | Wednesday 10 September 2025 00:49:46 +0000 (0:00:00.557) 0:01:19.663 *** 2025-09-10 00:50:52.277188 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:50:52.277199 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:50:52.277210 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:50:52.277235 | orchestrator | 2025-09-10 00:50:52.277246 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] 
***************************** 2025-09-10 00:50:52.277257 | orchestrator | Wednesday 10 September 2025 00:49:47 +0000 (0:00:00.301) 0:01:19.965 *** 2025-09-10 00:50:52.277268 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:50:52.277279 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:50:52.277289 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:50:52.277300 | orchestrator | 2025-09-10 00:50:52.277311 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-09-10 00:50:52.277322 | orchestrator | Wednesday 10 September 2025 00:49:47 +0000 (0:00:00.303) 0:01:20.268 *** 2025-09-10 00:50:52.277332 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:50:52.277343 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:50:52.277354 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:50:52.277364 | orchestrator | 2025-09-10 00:50:52.277375 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-09-10 00:50:52.277386 | orchestrator | Wednesday 10 September 2025 00:49:47 +0000 (0:00:00.314) 0:01:20.583 *** 2025-09-10 00:50:52.277397 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:50:52.277407 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:50:52.277418 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:50:52.277429 | orchestrator | 2025-09-10 00:50:52.277440 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-09-10 00:50:52.277450 | orchestrator | Wednesday 10 September 2025 00:49:48 +0000 (0:00:00.581) 0:01:21.165 *** 2025-09-10 00:50:52.277469 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:50:52.277480 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:50:52.277490 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:50:52.277501 | orchestrator | 2025-09-10 00:50:52.277512 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no 
leader] ***************** 2025-09-10 00:50:52.277523 | orchestrator | Wednesday 10 September 2025 00:49:48 +0000 (0:00:00.346) 0:01:21.511 *** 2025-09-10 00:50:52.277533 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:50:52.277544 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:50:52.277555 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:50:52.277566 | orchestrator | 2025-09-10 00:50:52.277576 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-09-10 00:50:52.277587 | orchestrator | Wednesday 10 September 2025 00:49:49 +0000 (0:00:00.319) 0:01:21.830 *** 2025-09-10 00:50:52.277598 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:50:52.277609 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:50:52.277619 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:50:52.277630 | orchestrator | 2025-09-10 00:50:52.277641 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-09-10 00:50:52.277652 | orchestrator | Wednesday 10 September 2025 00:49:49 +0000 (0:00:00.303) 0:01:22.134 *** 2025-09-10 00:50:52.277663 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:50:52.277673 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:50:52.277684 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:50:52.277694 | orchestrator | 2025-09-10 00:50:52.277705 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-09-10 00:50:52.277716 | orchestrator | Wednesday 10 September 2025 00:49:49 +0000 (0:00:00.538) 0:01:22.672 *** 2025-09-10 00:50:52.277726 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:50:52.277737 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:50:52.277748 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:50:52.277759 | orchestrator | 2025-09-10 00:50:52.277770 | orchestrator | TASK [ovn-db : Get OVN SB database information] 
******************************** 2025-09-10 00:50:52.277780 | orchestrator | Wednesday 10 September 2025 00:49:50 +0000 (0:00:00.302) 0:01:22.975 *** 2025-09-10 00:50:52.277791 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:50:52.277802 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:50:52.277812 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:50:52.277823 | orchestrator | 2025-09-10 00:50:52.277833 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-09-10 00:50:52.277844 | orchestrator | Wednesday 10 September 2025 00:49:50 +0000 (0:00:00.310) 0:01:23.286 *** 2025-09-10 00:50:52.277855 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:50:52.277865 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:50:52.277885 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:50:52.277896 | orchestrator | 2025-09-10 00:50:52.277907 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-09-10 00:50:52.277917 | orchestrator | Wednesday 10 September 2025 00:49:50 +0000 (0:00:00.284) 0:01:23.570 *** 2025-09-10 00:50:52.277928 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:50:52.277939 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:50:52.277955 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:50:52.277966 | orchestrator | 2025-09-10 00:50:52.277977 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-09-10 00:50:52.277988 | orchestrator | Wednesday 10 September 2025 00:49:51 +0000 (0:00:00.301) 0:01:23.872 *** 2025-09-10 00:50:52.277999 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 00:50:52.278009 | orchestrator | 2025-09-10 00:50:52.278072 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-09-10 00:50:52.278083 | 
orchestrator | Wednesday 10 September 2025 00:49:52 +0000 (0:00:00.868) 0:01:24.741 *** 2025-09-10 00:50:52.278094 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:50:52.278113 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:50:52.278124 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:50:52.278134 | orchestrator | 2025-09-10 00:50:52.278145 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-09-10 00:50:52.278156 | orchestrator | Wednesday 10 September 2025 00:49:52 +0000 (0:00:00.562) 0:01:25.304 *** 2025-09-10 00:50:52.278167 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:50:52.278178 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:50:52.278188 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:50:52.278199 | orchestrator | 2025-09-10 00:50:52.278210 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-09-10 00:50:52.278247 | orchestrator | Wednesday 10 September 2025 00:49:53 +0000 (0:00:00.751) 0:01:26.055 *** 2025-09-10 00:50:52.278259 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:50:52.278270 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:50:52.278280 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:50:52.278291 | orchestrator | 2025-09-10 00:50:52.278302 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-09-10 00:50:52.278312 | orchestrator | Wednesday 10 September 2025 00:49:53 +0000 (0:00:00.675) 0:01:26.731 *** 2025-09-10 00:50:52.278323 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:50:52.278334 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:50:52.278344 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:50:52.278355 | orchestrator | 2025-09-10 00:50:52.278365 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-09-10 00:50:52.278376 | orchestrator | Wednesday 10 
September 2025 00:49:54 +0000 (0:00:00.557) 0:01:27.288 ***
2025-09-10 00:50:52.278387 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:50:52.278397 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:50:52.278408 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:50:52.278419 | orchestrator |
2025-09-10 00:50:52.278429 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2025-09-10 00:50:52.278440 | orchestrator | Wednesday 10 September 2025 00:49:55 +0000 (0:00:00.463) 0:01:27.751 ***
2025-09-10 00:50:52.278450 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:50:52.278461 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:50:52.278472 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:50:52.278482 | orchestrator |
2025-09-10 00:50:52.278493 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2025-09-10 00:50:52.278504 | orchestrator | Wednesday 10 September 2025 00:49:55 +0000 (0:00:00.360) 0:01:28.112 ***
2025-09-10 00:50:52.278514 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:50:52.278525 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:50:52.278536 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:50:52.278546 | orchestrator |
2025-09-10 00:50:52.278557 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2025-09-10 00:50:52.278568 | orchestrator | Wednesday 10 September 2025 00:49:55 +0000 (0:00:00.546) 0:01:28.659 ***
2025-09-10 00:50:52.278579 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:50:52.278589 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:50:52.278600 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:50:52.278610 | orchestrator |
2025-09-10 00:50:52.278621 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-09-10 00:50:52.278632 | orchestrator | Wednesday 10 September 2025 00:49:56 +0000 (0:00:00.346) 0:01:29.005 ***
2025-09-10 00:50:52.278643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:50:52.278665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:50:52.278684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:50:52.278707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:50:52.278721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:50:52.278733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:50:52.278744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:50:52.278755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:50:52.278766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:50:52.278777 | orchestrator |
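The `item=` payloads printed by the loops above all come from one kolla-ansible service map. As a reading aid, here is a minimal Python sketch of that structure; the key names and values are copied verbatim from the log, while the variable name `ovn_db_services` and the directory derivation are assumptions mirroring how the role's loops behave:

```python
# Sketch of the service map driving the "Ensuring config directories exist"
# and "Copying over config.json files" loops. Values copied from the log;
# the name ovn_db_services is an assumption, not taken from the role source.
ovn_db_services = {
    "ovn-northd": {
        "container_name": "ovn_northd",
        "group": "ovn-northd",
        "enabled": True,
        "image": "registry.osism.tech/kolla/ovn-northd:2024.2",
        "volumes": [
            "/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "kolla_logs:/var/log/kolla/",
        ],
        "dimensions": {},
    },
    "ovn-nb-db": {
        "container_name": "ovn_nb_db",
        "group": "ovn-nb-db",
        "enabled": True,
        "image": "registry.osism.tech/kolla/ovn-nb-db-server:2024.2",
        "volumes": [
            "/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "ovn_nb_db:/var/lib/openvswitch/ovn-nb/",
            "kolla_logs:/var/log/kolla/",
        ],
        "dimensions": {},
    },
    "ovn-sb-db": {
        "container_name": "ovn_sb_db",
        "group": "ovn-sb-db",
        "enabled": True,
        "image": "registry.osism.tech/kolla/ovn-sb-db-server:2024.2",
        "volumes": [
            "/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "ovn_sb_db:/var/lib/openvswitch/ovn-sb/",
            "kolla_logs:/var/log/kolla/",
        ],
        "dimensions": {},
    },
}

# Each loop iteration receives one {'key': ..., 'value': ...} pair, exactly
# as shown in the (item=...) annotations in the log entries above.
items = [{"key": k, "value": v} for k, v in ovn_db_services.items()]
# The config directory per service matches the host side of the first volume.
config_dirs = ["/etc/kolla/" + i["key"] for i in items if i["value"]["enabled"]]
```

This is only a reconstruction from the logged item payloads, not the role's actual variable definition.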
2025-09-10 00:50:52.278788 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-09-10 00:50:52.278799 | orchestrator | Wednesday 10 September 2025 00:49:57 +0000 (0:00:01.304) 0:01:30.310 *** 2025-09-10 00:50:52.278810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.278821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.278839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.278850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.278872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 
'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.278884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.278895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.278906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.278917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.278928 | orchestrator | 2025-09-10 00:50:52.278938 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-09-10 00:50:52.278949 | orchestrator | Wednesday 10 September 2025 00:50:01 +0000 (0:00:04.021) 0:01:34.331 *** 2025-09-10 00:50:52.278960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.278971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.278990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.279029 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-09-10 00:50:52.279040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.279063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.279075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.279086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.279097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.279107 | orchestrator | 2025-09-10 00:50:52.279118 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-10 00:50:52.279129 | orchestrator | Wednesday 10 September 2025 00:50:03 +0000 (0:00:02.035) 0:01:36.366 *** 2025-09-10 00:50:52.279140 | orchestrator | 2025-09-10 00:50:52.279150 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-10 00:50:52.279161 | orchestrator | Wednesday 10 September 2025 00:50:03 +0000 (0:00:00.071) 0:01:36.438 *** 2025-09-10 00:50:52.279171 | orchestrator | 2025-09-10 00:50:52.279182 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-10 00:50:52.279193 | orchestrator | Wednesday 10 September 2025 00:50:03 +0000 (0:00:00.069) 0:01:36.507 *** 2025-09-10 00:50:52.279204 | orchestrator | 2025-09-10 00:50:52.279230 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-09-10 00:50:52.279241 | orchestrator | Wednesday 10 September 2025 00:50:03 +0000 (0:00:00.072) 0:01:36.580 *** 2025-09-10 00:50:52.279261 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:50:52.279272 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:50:52.279282 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:50:52.279293 | orchestrator | 2025-09-10 00:50:52.279304 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-09-10 00:50:52.279315 | orchestrator | Wednesday 10 September 2025 00:50:06 +0000 (0:00:02.543) 0:01:39.124 *** 2025-09-10 00:50:52.279325 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:50:52.279336 | 
orchestrator | changed: [testbed-node-2] 2025-09-10 00:50:52.279347 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:50:52.279357 | orchestrator | 2025-09-10 00:50:52.279368 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-09-10 00:50:52.279378 | orchestrator | Wednesday 10 September 2025 00:50:08 +0000 (0:00:02.351) 0:01:41.476 *** 2025-09-10 00:50:52.279389 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:50:52.279400 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:50:52.279411 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:50:52.279421 | orchestrator | 2025-09-10 00:50:52.279432 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-09-10 00:50:52.279442 | orchestrator | Wednesday 10 September 2025 00:50:11 +0000 (0:00:02.603) 0:01:44.079 *** 2025-09-10 00:50:52.279453 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:50:52.279464 | orchestrator | 2025-09-10 00:50:52.279474 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-09-10 00:50:52.279485 | orchestrator | Wednesday 10 September 2025 00:50:11 +0000 (0:00:00.357) 0:01:44.437 *** 2025-09-10 00:50:52.279496 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:50:52.279507 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:50:52.279517 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:50:52.279528 | orchestrator | 2025-09-10 00:50:52.279538 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-09-10 00:50:52.279549 | orchestrator | Wednesday 10 September 2025 00:50:12 +0000 (0:00:00.835) 0:01:45.272 *** 2025-09-10 00:50:52.279560 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:50:52.279570 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:50:52.279581 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:50:52.279592 | orchestrator | 2025-09-10 
00:50:52.279602 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-09-10 00:50:52.279613 | orchestrator | Wednesday 10 September 2025 00:50:13 +0000 (0:00:00.585) 0:01:45.857 *** 2025-09-10 00:50:52.279623 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:50:52.279634 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:50:52.279645 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:50:52.279655 | orchestrator | 2025-09-10 00:50:52.279666 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-09-10 00:50:52.279677 | orchestrator | Wednesday 10 September 2025 00:50:13 +0000 (0:00:00.754) 0:01:46.612 *** 2025-09-10 00:50:52.279687 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:50:52.279703 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:50:52.279714 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:50:52.279725 | orchestrator | 2025-09-10 00:50:52.279735 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-09-10 00:50:52.279746 | orchestrator | Wednesday 10 September 2025 00:50:14 +0000 (0:00:00.685) 0:01:47.297 *** 2025-09-10 00:50:52.279757 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:50:52.279768 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:50:52.279785 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:50:52.279796 | orchestrator | 2025-09-10 00:50:52.279806 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-09-10 00:50:52.279817 | orchestrator | Wednesday 10 September 2025 00:50:15 +0000 (0:00:01.402) 0:01:48.700 *** 2025-09-10 00:50:52.279828 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:50:52.279839 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:50:52.279850 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:50:52.279867 | orchestrator | 2025-09-10 00:50:52.279878 | orchestrator | TASK [ovn-db : Unset 
bootstrap args fact] ************************************** 2025-09-10 00:50:52.279888 | orchestrator | Wednesday 10 September 2025 00:50:17 +0000 (0:00:01.076) 0:01:49.777 *** 2025-09-10 00:50:52.279899 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:50:52.279910 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:50:52.279920 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:50:52.279931 | orchestrator | 2025-09-10 00:50:52.279942 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-09-10 00:50:52.279952 | orchestrator | Wednesday 10 September 2025 00:50:17 +0000 (0:00:00.365) 0:01:50.142 *** 2025-09-10 00:50:52.279963 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.279975 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.279986 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.279997 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.280008 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.280019 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.280030 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.280042 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2025-09-10 00:50:52.280065 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.280083 | orchestrator | 2025-09-10 00:50:52.280094 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-09-10 00:50:52.280105 | orchestrator | Wednesday 10 September 2025 00:50:18 +0000 (0:00:01.455) 0:01:51.598 *** 2025-09-10 00:50:52.280116 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.280127 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.280138 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.280149 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 
'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.280160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.280171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.280182 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.280193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.280204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.280235 | orchestrator | 2025-09-10 00:50:52.280252 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-09-10 00:50:52.280263 | orchestrator | Wednesday 10 September 2025 00:50:23 +0000 (0:00:04.759) 0:01:56.357 *** 2025-09-10 00:50:52.280280 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.280292 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.280303 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.280314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.280325 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.280336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.280347 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.280358 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.280369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 00:50:52.280390 | orchestrator | 2025-09-10 00:50:52.280401 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-10 00:50:52.280412 | orchestrator | Wednesday 10 September 2025 00:50:26 +0000 (0:00:02.812) 0:01:59.170 *** 2025-09-10 00:50:52.280423 | orchestrator | 2025-09-10 00:50:52.280434 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-10 00:50:52.280444 | orchestrator | Wednesday 10 September 2025 00:50:26 +0000 (0:00:00.068) 0:01:59.239 *** 2025-09-10 00:50:52.280455 | orchestrator | 2025-09-10 00:50:52.280466 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-10 00:50:52.280481 | orchestrator | Wednesday 10 September 2025 00:50:26 +0000 (0:00:00.069) 0:01:59.308 *** 2025-09-10 00:50:52.280492 | orchestrator | 2025-09-10 00:50:52.280503 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-09-10 00:50:52.280514 | orchestrator | Wednesday 10 September 2025 00:50:26 +0000 (0:00:00.063) 0:01:59.372 *** 2025-09-10 00:50:52.280524 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:50:52.280535 | orchestrator | changed: [testbed-node-2] 2025-09-10 
RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
Wednesday 10 September 2025  00:50:33 +0000 (0:00:06.433)       0:02:05.805 ***
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
Wednesday 10 September 2025  00:50:39 +0000 (0:00:06.137)       0:02:11.943 ***
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ovn-db : Wait for leader election] ***************************************
Wednesday 10 September 2025  00:50:46 +0000 (0:00:07.226)       0:02:19.170 ***
skipping: [testbed-node-0]

TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
Wednesday 10 September 2025  00:50:46 +0000 (0:00:00.141)       0:02:19.311 ***
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ovn-db : Configure OVN NB connection settings] ***************************
Wednesday 10 September 2025  00:50:47 +0000 (0:00:00.825)       0:02:20.136 ***
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-0]

TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
Wednesday 10 September 2025  00:50:48 +0000 (0:00:00.636)       0:02:20.773 ***
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ovn-db : Configure OVN SB connection settings] ***************************
Wednesday 10 September 2025  00:50:48 +0000 (0:00:00.769)       0:02:21.542 ***
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-0]

TASK [ovn-db : Wait for ovn-nb-db] *********************************************
Wednesday 10 September 2025  00:50:49 +0000 (0:00:00.663)       0:02:22.206 ***
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ovn-db : Wait for ovn-sb-db] *********************************************
Wednesday 10 September 2025  00:50:50 +0000 (0:00:00.761)       0:02:22.968 ***
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY RECAP *********************************************************************
testbed-node-0 : ok=44  changed=18  unreachable=0  failed=0  skipped=20  rescued=0  ignored=0
testbed-node-1 : ok=43  changed=19  unreachable=0  failed=0  skipped=22  rescued=0  ignored=0
testbed-node-2 : ok=43  changed=19  unreachable=0  failed=0  skipped=22  rescued=0  ignored=0
testbed-node-3 : ok=12  changed=8   unreachable=0  failed=0  skipped=0   rescued=0  ignored=0
testbed-node-4 : ok=12  changed=8   unreachable=0  failed=0  skipped=0   rescued=0  ignored=0
testbed-node-5 : ok=12  changed=8   unreachable=0  failed=0  skipped=0   rescued=0  ignored=0

TASKS RECAP ********************************************************************
Wednesday 10 September 2025  00:50:51 +0000 (0:00:00.868)       0:02:23.836 ***
===============================================================================
ovn-controller : Restart ovn-controller container ---------------------- 36.74s
ovn-controller : Configure OVN in OVSDB -------------------------------- 20.50s
ovn-db : Restart ovn-northd container ----------------------------------- 9.83s
ovn-db : Restart ovn-nb-db container ------------------------------------ 8.98s
ovn-db : Restart ovn-sb-db container ------------------------------------ 8.49s
ovn-db : Copying over config.json files for services -------------------- 4.76s
ovn-db : Copying over config.json files for services -------------------- 4.02s
ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.03s
ovn-db : Check ovn containers ------------------------------------------- 2.81s
ovn-controller : Ensuring systemd override directory exists ------------- 2.20s
ovn-controller : Copying over systemd override -------------------------- 2.12s
ovn-db : Check ovn containers ------------------------------------------- 2.04s
ovn-controller : include_tasks ------------------------------------------ 1.75s
ovn-controller : Check ovn-controller containers ------------------------ 1.72s
ovn-controller : Reload systemd config ---------------------------------- 1.70s
ovn-controller : Copying over config.json files for services ------------ 1.56s
ovn-db : Ensuring config directories exist ------------------------------ 1.46s
Group hosts based on enabled services ----------------------------------- 1.41s
ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.40s
ovn-db : Ensuring config directories exist ------------------------------ 1.30s

2025-09-10 00:50:52 | INFO  | Task 03706a7e-02cf-4713-9bb4-57191532b70c is in state SUCCESS
2025-09-10 00:50:52 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:50:55 | INFO  | Task a1213808-8d6f-4369-92f9-e840781d332a is in state STARTED
2025-09-10 00:50:55 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED
2025-09-10 00:50:55 | INFO  | Wait 1 second(s) until the next check
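The "Get OVN_Northbound cluster leader" / "Configure OVN NB connection settings" pair above shows the usual pattern: the leader query runs on every node, but the connection settings are applied only on the node whose database reports itself as the Raft leader (changed on testbed-node-0, skipping elsewhere). A hedged sketch of that leader check, parsing the `Role:` field of `ovs-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound` output; the helper name and sample text are illustrative, not the kolla-ansible task implementation:

```python
# Sketch: decide whether this server is the OVN_Northbound Raft leader from
# `cluster/status` output. Hypothetical helper, not kolla-ansible's code.

def is_cluster_leader(status_output: str) -> bool:
    """Return True if the status dump reports Role: leader."""
    for line in status_output.splitlines():
        key, _, value = line.partition(":")
        if key.strip() == "Role":
            return value.strip() == "leader"
    return False


# Illustrative sample of a cluster/status dump (fields abbreviated).
sample = """\
Name: OVN_Northbound
Cluster ID: f1a2
Server ID: 1b2c
Address: tcp:192.168.16.10:6643
Status: cluster member
Role: leader
Term: 4
"""
```

On a follower the same dump would carry `Role: follower`, so only one host ever runs the configuration step.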
[... identical "is in state STARTED" / "Wait 1 second(s) until the next check" records repeat every ~3 seconds for both tasks from 00:51:07 through 00:53:42 ...]
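The repeated records above come from a simple client-side wait loop: each task ID is polled, the states are logged, and the client sleeps between rounds until every task reaches a terminal state. A minimal sketch, assuming a hypothetical `get_state(task_id)` callable (the real OSISM client queries its task API instead):

```python
import time

TERMINAL_STATES = {"SUCCESS", "FAILURE"}


def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=3600, sleep=time.sleep):
    """Poll every task until all reach a terminal state or the timeout expires.

    `get_state` and `sleep` are injectable here only so the sketch is easy to
    test; they stand in for the real task-API call and time.sleep.
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                pending.discard(task_id)
        if pending:
            if time.monotonic() > deadline:
                raise TimeoutError(f"tasks still pending: {sorted(pending)}")
            print(f"Wait {interval:g} second(s) until the next check")
            sleep(interval)
```

With a one-second interval plus per-poll API latency, the observed spacing of roughly three seconds between rounds is what this loop would produce.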
2025-09-10 00:53:45 | INFO  | Task a1213808-8d6f-4369-92f9-e840781d332a is in state STARTED
2025-09-10 00:53:45 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED
2025-09-10 00:53:45 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:53:49 | INFO  | Task a1213808-8d6f-4369-92f9-e840781d332a is in state SUCCESS

PLAY [Group hosts based on configuration] **************************************

TASK [Group hosts based on Kolla action] ***************************************
Wednesday 10 September 2025  00:47:17 +0000 (0:00:00.276)       0:00:00.277 ***
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Group hosts based on enabled services] ***********************************
Wednesday 10 September 2025  00:47:17 +0000 (0:00:00.331)       0:00:00.608 ***
ok: [testbed-node-0] => (item=enable_loadbalancer_True)
ok: [testbed-node-1] => (item=enable_loadbalancer_True)
ok: [testbed-node-2] => (item=enable_loadbalancer_True)

PLAY [Apply role loadbalancer] *************************************************

TASK [loadbalancer : include_tasks] ********************************************
Wednesday 10 September 2025  00:47:18 +0000 (0:00:00.415)       0:00:01.023 ***
included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [loadbalancer : Check IPv6 support] ***************************************
Wednesday 10 September 2025  00:47:18 +0000 (0:00:00.576)       0:00:01.599 ***
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Setting sysctl values] ***************************************************
Wednesday 10 September 2025  00:47:19 +0000 (0:00:00.736)       0:00:02.336 ***
included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2

TASK [sysctl : Check IPv6 support] *********************************************
Wednesday 10 September 2025  00:47:20 +0000 (0:00:00.805)       0:00:03.142 ***
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [sysctl : Setting sysctl values] ******************************************
Wednesday 10 September 2025  00:47:20 +0000 (0:00:00.681)       0:00:03.823 ***
changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})

TASK [module-load : Load modules] **********************************************
Wednesday 10 September 2025  00:47:23 +0000 (0:00:02.606)       0:00:06.429 ***
changed: [testbed-node-0] => (item=ip_vs)
changed: [testbed-node-1] => (item=ip_vs)
changed: [testbed-node-2] => (item=ip_vs)

TASK [module-load : Persist modules via modules-load.d] ************************
Wednesday 10 September 2025  00:47:24 +0000 (0:00:00.973)       0:00:07.403 ***
changed: [testbed-node-0] => (item=ip_vs)
changed: [testbed-node-2] => (item=ip_vs)
changed: [testbed-node-1] => (item=ip_vs)

TASK [module-load : Drop module persistence] ***********************************
Wednesday 10 September 2025  00:47:26 +0000 (0:00:01.530)       0:00:08.933 ***
skipping: [testbed-node-0] => (item=ip_vs)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=ip_vs)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=ip_vs)
skipping: [testbed-node-2]

TASK [loadbalancer : Ensuring config directories exist] ************************
Wednesday 10 September 2025  00:47:26 +0000 (0:00:00.491)       0:00:09.425 ***
changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-10 00:53:49.024984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-10 00:53:49.025009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-10 00:53:49.025022 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-10 00:53:49.025065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-10 00:53:49.025086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-10 00:53:49.025098 | orchestrator | 2025-09-10 00:53:49.025109 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-09-10 00:53:49.025178 | orchestrator | Wednesday 10 September 2025 00:47:28 +0000 (0:00:02.306) 0:00:11.731 *** 2025-09-10 00:53:49.025190 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:53:49.025201 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:53:49.025212 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:53:49.025222 | orchestrator | 2025-09-10 00:53:49.025233 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config 
subdirectories exist] **** 2025-09-10 00:53:49.025244 | orchestrator | Wednesday 10 September 2025 00:47:30 +0000 (0:00:01.521) 0:00:13.253 *** 2025-09-10 00:53:49.025255 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-09-10 00:53:49.025266 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-09-10 00:53:49.025277 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-09-10 00:53:49.025287 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-09-10 00:53:49.025298 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-09-10 00:53:49.025309 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-09-10 00:53:49.025319 | orchestrator | 2025-09-10 00:53:49.025330 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-09-10 00:53:49.025341 | orchestrator | Wednesday 10 September 2025 00:47:32 +0000 (0:00:02.569) 0:00:15.822 *** 2025-09-10 00:53:49.025352 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:53:49.025363 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:53:49.025373 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:53:49.025384 | orchestrator | 2025-09-10 00:53:49.025394 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-09-10 00:53:49.025405 | orchestrator | Wednesday 10 September 2025 00:47:34 +0000 (0:00:01.864) 0:00:17.687 *** 2025-09-10 00:53:49.025416 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:53:49.025426 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:53:49.025437 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:53:49.025448 | orchestrator | 2025-09-10 00:53:49.025459 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-09-10 00:53:49.025469 | orchestrator | Wednesday 10 September 2025 00:47:36 +0000 (0:00:01.682) 0:00:19.369 *** 2025-09-10 00:53:49.025481 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-10 00:53:49.025508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-10 00:53:49.025528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-10 00:53:49.025542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 
'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__7a5c47052dd9856fcda2cf1ffb26f240f4a8821a', '__omit_place_holder__7a5c47052dd9856fcda2cf1ffb26f240f4a8821a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-10 00:53:49.025553 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.025565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-10 00:53:49.025576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-10 00:53:49.025587 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-10 00:53:49.025604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__7a5c47052dd9856fcda2cf1ffb26f240f4a8821a', '__omit_place_holder__7a5c47052dd9856fcda2cf1ffb26f240f4a8821a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-10 00:53:49.025615 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.025635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-10 00:53:49.025653 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-10 00:53:49.025665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-10 00:53:49.025676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__7a5c47052dd9856fcda2cf1ffb26f240f4a8821a', '__omit_place_holder__7a5c47052dd9856fcda2cf1ffb26f240f4a8821a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-10 00:53:49.025687 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.025698 | 
orchestrator | 2025-09-10 00:53:49.025709 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-09-10 00:53:49.025719 | orchestrator | Wednesday 10 September 2025 00:47:37 +0000 (0:00:00.783) 0:00:20.152 *** 2025-09-10 00:53:49.025731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-10 00:53:49.025742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-10 00:53:49.025777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-10 00:53:49.025790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-10 00:53:49.025801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-10 00:53:49.025813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__7a5c47052dd9856fcda2cf1ffb26f240f4a8821a', 
'__omit_place_holder__7a5c47052dd9856fcda2cf1ffb26f240f4a8821a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-10 00:53:49.025824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-10 00:53:49.025835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-10 00:53:49.025851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__7a5c47052dd9856fcda2cf1ffb26f240f4a8821a', 
'__omit_place_holder__7a5c47052dd9856fcda2cf1ffb26f240f4a8821a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-10 00:53:49.025881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-10 00:53:49.025893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-10 00:53:49.025904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__7a5c47052dd9856fcda2cf1ffb26f240f4a8821a', 
'__omit_place_holder__7a5c47052dd9856fcda2cf1ffb26f240f4a8821a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-10 00:53:49.025915 | orchestrator | 2025-09-10 00:53:49.025926 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-09-10 00:53:49.025937 | orchestrator | Wednesday 10 September 2025 00:47:42 +0000 (0:00:04.923) 0:00:25.076 *** 2025-09-10 00:53:49.025948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-10 00:53:49.025960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-10 00:53:49.025971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-10 00:53:49.026008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-10 00:53:49.026369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-10 00:53:49.026385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-10 00:53:49.026397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-10 00:53:49.026408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-10 00:53:49.026419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-10 00:53:49.026439 | orchestrator |
2025-09-10 00:53:49.026450 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] *********************************
2025-09-10 00:53:49.026462 | orchestrator | Wednesday 10 September 2025 00:47:45 +0000 (0:00:03.205) 0:00:28.281 ***
2025-09-10 00:53:49.026473 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-09-10 00:53:49.026484 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-09-10 00:53:49.026494 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-09-10 00:53:49.026505 | orchestrator |
2025-09-10 00:53:49.026516 | orchestrator | TASK [loadbalancer : Copying over proxysql config] *****************************
2025-09-10 00:53:49.026526 | orchestrator | Wednesday 10 September 2025 00:47:48 +0000 (0:00:03.276) 0:00:31.558 ***
2025-09-10 00:53:49.026543 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-09-10 00:53:49.026555 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-09-10 00:53:49.026565 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-09-10 00:53:49.026576 | orchestrator |
2025-09-10 00:53:49.028055 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2025-09-10 00:53:49.028197 | orchestrator | Wednesday 10 September 2025 00:47:53 +0000 (0:00:05.236) 0:00:36.794 ***
2025-09-10 00:53:49.028226 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:53:49.028249 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:53:49.028267 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:53:49.028279 | orchestrator |
2025-09-10 00:53:49.028290 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2025-09-10 00:53:49.028301 | orchestrator | Wednesday 10 September 2025 00:47:54 +0000 (0:00:00.539) 0:00:37.333 ***
2025-09-10 00:53:49.028313 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-09-10 00:53:49.028325 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-09-10 00:53:49.028336 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-09-10 00:53:49.028347 | orchestrator |
2025-09-10 00:53:49.028357 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2025-09-10 00:53:49.028368 | orchestrator | Wednesday 10 September 2025 00:47:57 +0000 (0:00:02.692) 0:00:40.026 ***
2025-09-10 00:53:49.028380 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-09-10 00:53:49.028391 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-09-10 00:53:49.028402 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-09-10 00:53:49.028413 | orchestrator |
2025-09-10 00:53:49.028423 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2025-09-10 00:53:49.028434 | orchestrator | Wednesday 10 September 2025 00:48:00 +0000 (0:00:03.578) 0:00:43.604 ***
2025-09-10 00:53:49.028445 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem)
2025-09-10 00:53:49.028457 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem)
2025-09-10 00:53:49.028468 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem)
2025-09-10 00:53:49.028478 | orchestrator |
2025-09-10 00:53:49.028489 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2025-09-10 00:53:49.028500 | orchestrator | Wednesday 10 September 2025 00:48:02 +0000 (0:00:02.219) 0:00:45.824 ***
2025-09-10 00:53:49.028511 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem)
2025-09-10 00:53:49.028545 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem)
2025-09-10 00:53:49.028556 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem)
2025-09-10 00:53:49.028567 | orchestrator |
2025-09-10 00:53:49.028580 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-09-10 00:53:49.028593 | orchestrator | Wednesday 10 September 2025 00:48:05 +0000 (0:00:02.403) 0:00:48.227 ***
2025-09-10 00:53:49.028606 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-10 00:53:49.028619 | orchestrator |
2025-09-10 00:53:49.028632 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] ***
2025-09-10 00:53:49.028645 | orchestrator | Wednesday 10 September 2025 00:48:05 +0000 (0:00:00.677) 0:00:48.904 ***
2025-09-10 00:53:49.028661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-10 00:53:49.028678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-10 00:53:49.028729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-10 00:53:49.028752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-10 00:53:49.028772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-10 00:53:49.028816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-10 00:53:49.028839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-10 00:53:49.028861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-10 00:53:49.028881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-10 00:53:49.028901 | orchestrator |
2025-09-10 00:53:49.028923 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] ***
2025-09-10 00:53:49.028935 | orchestrator | Wednesday 10 September 2025 00:48:10 +0000 (0:00:04.090) 0:00:52.994 ***
2025-09-10 00:53:49.028959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-10 00:53:49.028972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-10 00:53:49.028983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-10 00:53:49.029003 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:53:49.029015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-10 00:53:49.029027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-10 00:53:49.029038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-10 00:53:49.029049 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:53:49.029066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-10 00:53:49.029085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-10 00:53:49.029097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-10 00:53:49.029116 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:53:49.029127 | orchestrator |
2025-09-10 00:53:49.029139 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] ***
2025-09-10 00:53:49.029254 | orchestrator | Wednesday 10 September 2025 00:48:10 +0000 (0:00:00.900) 0:00:53.895 ***
2025-09-10 00:53:49.029276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-10 00:53:49.029297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-10 00:53:49.029317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-10 00:53:49.029330 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:53:49.029341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-10 00:53:49.029376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-10 00:53:49.029388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-10 00:53:49.029409 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:53:49.029421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-10 00:53:49.029432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-10 00:53:49.029443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-10 00:53:49.029454 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:53:49.029465 | orchestrator |
2025-09-10 00:53:49.029476 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2025-09-10 00:53:49.029487 | orchestrator | Wednesday 10 September 2025 00:48:12 +0000 (0:00:01.308) 0:00:55.203 ***
2025-09-10 00:53:49.029498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-10 00:53:49.029518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-10 00:53:49.029530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-10 00:53:49.029548 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:53:49.029605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-10 00:53:49.029628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-10 00:53:49.029647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-10 00:53:49.029668 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:53:49.029680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-10 00:53:49.029691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-10 00:53:49.029715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-10 00:53:49.029736 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:53:49.029747 | orchestrator |
2025-09-10 00:53:49.029758 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2025-09-10 00:53:49.029769 | orchestrator | Wednesday 10 September 2025 00:48:13 +0000 (0:00:01.102) 0:00:56.306 ***
2025-09-10 00:53:49.029780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-10 00:53:49.029792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-10 00:53:49.029803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-10 00:53:49.029814 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:53:49.029825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-10 00:53:49.029836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-10 00:53:49.029853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-10 00:53:49.029872 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:53:49.029901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-10 00:53:49.029920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-10 00:53:49.029951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-10 00:53:49.029971 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:53:49.029991 | orchestrator |
2025-09-10 00:53:49.030002 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2025-09-10 00:53:49.030014 | orchestrator | Wednesday 10 September 2025 00:48:14 +0000 (0:00:00.902) 0:00:57.208 ***
2025-09-10 00:53:49.030137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-10 00:53:49.030181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-10 00:53:49.030194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-10 00:53:49.030217 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:53:49.030245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-10 00:53:49.030258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-10 00:53:49.030269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-10 00:53:49.030281 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:53:49.030292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-10 00:53:49.030303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-10 00:53:49.030315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro',
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-10 00:53:49.030326 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.030337 | orchestrator | 2025-09-10 00:53:49.030348 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-09-10 00:53:49.030366 | orchestrator | Wednesday 10 September 2025 00:48:15 +0000 (0:00:01.491) 0:00:58.700 *** 2025-09-10 00:53:49.030383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-10 00:53:49.030404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-10 00:53:49.030416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-10 00:53:49.030431 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.030451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-10 00:53:49.030469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-10 00:53:49.030488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-10 00:53:49.030518 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.030538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-10 00:53:49.030592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-10 00:53:49.030615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-10 00:53:49.030636 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.030656 | orchestrator | 2025-09-10 00:53:49.030676 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-09-10 00:53:49.030695 | orchestrator | Wednesday 10 September 2025 00:48:16 +0000 (0:00:01.004) 0:00:59.705 *** 2025-09-10 00:53:49.030714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-10 00:53:49.030734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-10 00:53:49.030753 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-10 00:53:49.030771 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.030791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-10 00:53:49.030837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-10 00:53:49.030870 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-10 00:53:49.030891 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.030910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-10 00:53:49.030930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-10 00:53:49.030949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-10 00:53:49.030969 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.030988 | orchestrator | 2025-09-10 00:53:49.031006 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-09-10 00:53:49.031025 | orchestrator | Wednesday 10 September 2025 00:48:17 +0000 (0:00:00.551) 0:01:00.256 *** 2025-09-10 00:53:49.031044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-10 00:53:49.031077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-10 00:53:49.031105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-10 00:53:49.031127 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.031186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-10 00:53:49.031210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-10 00:53:49.031229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-10 00:53:49.031250 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.031268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-10 00:53:49.031297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-10 00:53:49.031309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-10 00:53:49.031320 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.031331 | orchestrator | 2025-09-10 00:53:49.031342 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-09-10 00:53:49.031353 | orchestrator | Wednesday 10 September 2025 00:48:18 +0000 (0:00:00.841) 0:01:01.098 *** 2025-09-10 00:53:49.031369 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-10 00:53:49.031381 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-10 00:53:49.031400 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-10 00:53:49.031411 | orchestrator | 2025-09-10 00:53:49.031422 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-09-10 00:53:49.031432 | orchestrator | Wednesday 10 September 2025 00:48:20 +0000 (0:00:02.103) 0:01:03.201 *** 2025-09-10 00:53:49.031443 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-10 00:53:49.031454 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-10 00:53:49.031464 | 
orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-10 00:53:49.031475 | orchestrator | 2025-09-10 00:53:49.031485 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-09-10 00:53:49.031496 | orchestrator | Wednesday 10 September 2025 00:48:22 +0000 (0:00:01.960) 0:01:05.161 *** 2025-09-10 00:53:49.031506 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-10 00:53:49.031517 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-10 00:53:49.031528 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-10 00:53:49.031539 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.031550 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-10 00:53:49.031560 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.031571 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-10 00:53:49.031589 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-10 00:53:49.031599 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.031610 | orchestrator | 2025-09-10 00:53:49.031621 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-09-10 00:53:49.031631 | orchestrator | Wednesday 10 September 2025 00:48:23 +0000 (0:00:01.728) 0:01:06.889 *** 2025-09-10 00:53:49.031643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-10 00:53:49.031654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-10 00:53:49.031666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-10 00:53:49.031689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 
'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-10 00:53:49.031701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-10 00:53:49.031712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-10 00:53:49.031729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-10 00:53:49.031741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-10 00:53:49.031752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-10 00:53:49.031763 | orchestrator | 2025-09-10 00:53:49.031774 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-09-10 00:53:49.031785 | orchestrator | Wednesday 10 September 2025 00:48:26 +0000 (0:00:02.734) 0:01:09.624 *** 2025-09-10 00:53:49.031796 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 00:53:49.031807 | orchestrator | 2025-09-10 00:53:49.031818 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-09-10 00:53:49.031828 | orchestrator | Wednesday 
10 September 2025 00:48:27 +0000 (0:00:00.840) 0:01:10.465 *** 2025-09-10 00:53:49.031846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-10 00:53:49.031865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-10 00:53:49.031877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.031895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.031906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-10 00:53:49.031917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-10 00:53:49.031928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.031951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.031964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-10 00:53:49.031982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-10 00:53:49.031993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.032004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.032015 | orchestrator | 2025-09-10 00:53:49.032026 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-09-10 00:53:49.032037 | orchestrator | Wednesday 10 September 2025 00:48:33 +0000 (0:00:06.003) 0:01:16.468 *** 2025-09-10 00:53:49.032053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-10 00:53:49.032072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': 
'30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-10 00:53:49.032091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-10 00:53:49.032102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-10 00:53:49.032113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.032124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.032135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.032168 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.032185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': 
'30'}}})  2025-09-10 00:53:49.032196 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.032226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-10 00:53:49.032238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-10 00:53:49.032249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.032260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.032272 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.032282 | orchestrator | 2025-09-10 00:53:49.032293 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-09-10 00:53:49.032304 | orchestrator | Wednesday 10 September 2025 00:48:34 +0000 (0:00:01.113) 0:01:17.582 *** 2025-09-10 00:53:49.032316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-10 00:53:49.032328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-10 00:53:49.032340 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.032351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-10 00:53:49.032362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-10 00:53:49.032373 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.032388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-10 00:53:49.032406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-10 00:53:49.032417 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.032428 | orchestrator | 2025-09-10 00:53:49.032444 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-09-10 00:53:49.032455 | orchestrator | Wednesday 10 September 2025 00:48:36 +0000 (0:00:02.308) 0:01:19.890 *** 2025-09-10 00:53:49.032466 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:53:49.032476 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:53:49.032487 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:53:49.032497 | orchestrator | 2025-09-10 00:53:49.032508 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-09-10 00:53:49.032519 | orchestrator | Wednesday 10 September 2025 00:48:38 +0000 (0:00:01.952) 0:01:21.842 *** 2025-09-10 00:53:49.032530 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:53:49.032540 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:53:49.032551 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:53:49.032562 | orchestrator | 2025-09-10 00:53:49.032572 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-09-10 00:53:49.032583 | orchestrator | Wednesday 10 September 
2025 00:48:41 +0000 (0:00:02.350) 0:01:24.193 *** 2025-09-10 00:53:49.032594 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 00:53:49.032604 | orchestrator | 2025-09-10 00:53:49.032615 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-09-10 00:53:49.032626 | orchestrator | Wednesday 10 September 2025 00:48:41 +0000 (0:00:00.732) 0:01:24.926 *** 2025-09-10 00:53:49.032638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-10 00:53:49.032650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.032661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.032685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-10 00:53:49.032703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.032715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.032726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-10 00:53:49.032738 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.032749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.032768 | orchestrator | 2025-09-10 00:53:49.032779 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-09-10 00:53:49.032790 | orchestrator | Wednesday 10 September 2025 00:48:48 +0000 (0:00:06.131) 0:01:31.058 *** 2025-09-10 00:53:49.032812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-10 00:53:49.032825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.032836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.032847 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.032859 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-10 00:53:49.032877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.032888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.032904 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.032923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-10 00:53:49.032935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.032946 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.032958 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.032969 | orchestrator | 2025-09-10 00:53:49.032979 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-09-10 00:53:49.032990 | orchestrator | Wednesday 10 September 2025 00:48:48 +0000 (0:00:00.553) 0:01:31.611 *** 2025-09-10 00:53:49.033001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-10 00:53:49.033020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-10 00:53:49.033031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-10 00:53:49.033043 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.033054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}})  2025-09-10 00:53:49.033065 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.033076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-10 00:53:49.033087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-10 00:53:49.033098 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.033109 | orchestrator | 2025-09-10 00:53:49.033119 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-09-10 00:53:49.033130 | orchestrator | Wednesday 10 September 2025 00:48:49 +0000 (0:00:00.933) 0:01:32.545 *** 2025-09-10 00:53:49.033203 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:53:49.033217 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:53:49.033233 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:53:49.033244 | orchestrator | 2025-09-10 00:53:49.033255 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-09-10 00:53:49.033266 | orchestrator | Wednesday 10 September 2025 00:48:50 +0000 (0:00:01.318) 0:01:33.864 *** 2025-09-10 00:53:49.033277 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:53:49.033288 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:53:49.033298 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:53:49.033309 | orchestrator | 2025-09-10 00:53:49.033326 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-09-10 00:53:49.033338 | orchestrator | Wednesday 10 September 2025 00:48:53 +0000 (0:00:02.212) 0:01:36.077 *** 2025-09-10 00:53:49.033349 | orchestrator | 
skipping: [testbed-node-0] 2025-09-10 00:53:49.033359 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.033370 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.033381 | orchestrator | 2025-09-10 00:53:49.033392 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-09-10 00:53:49.033403 | orchestrator | Wednesday 10 September 2025 00:48:53 +0000 (0:00:00.323) 0:01:36.401 *** 2025-09-10 00:53:49.033413 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 00:53:49.033424 | orchestrator | 2025-09-10 00:53:49.033435 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-09-10 00:53:49.033446 | orchestrator | Wednesday 10 September 2025 00:48:54 +0000 (0:00:00.913) 0:01:37.314 *** 2025-09-10 00:53:49.033457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-10 00:53:49.033477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server 
testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-10 00:53:49.033489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-10 00:53:49.033500 | orchestrator | 2025-09-10 00:53:49.033511 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-09-10 00:53:49.033522 | orchestrator | Wednesday 10 September 2025 00:48:56 +0000 (0:00:02.546) 0:01:39.861 *** 2025-09-10 00:53:49.035646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-10 00:53:49.035696 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.035709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-10 00:53:49.035719 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.035742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server 
testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-10 00:53:49.035753 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.035763 | orchestrator | 2025-09-10 00:53:49.035791 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-09-10 00:53:49.035803 | orchestrator | Wednesday 10 September 2025 00:48:58 +0000 (0:00:01.452) 0:01:41.313 *** 2025-09-10 00:53:49.035814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-10 00:53:49.035826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-10 00:53:49.035836 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.035846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-10 00:53:49.035857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-10 00:53:49.035870 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.035890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-10 00:53:49.035902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-10 00:53:49.035912 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.035927 | orchestrator | 2025-09-10 00:53:49.035937 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 
2025-09-10 00:53:49.035947 | orchestrator | Wednesday 10 September 2025 00:49:00 +0000 (0:00:01.687) 0:01:43.001 *** 2025-09-10 00:53:49.035956 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.035966 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.035976 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.035985 | orchestrator | 2025-09-10 00:53:49.035995 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-09-10 00:53:49.036005 | orchestrator | Wednesday 10 September 2025 00:49:00 +0000 (0:00:00.678) 0:01:43.679 *** 2025-09-10 00:53:49.036015 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.036024 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.036034 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.036043 | orchestrator | 2025-09-10 00:53:49.036053 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-09-10 00:53:49.036063 | orchestrator | Wednesday 10 September 2025 00:49:02 +0000 (0:00:01.271) 0:01:44.951 *** 2025-09-10 00:53:49.036072 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 00:53:49.036082 | orchestrator | 2025-09-10 00:53:49.036092 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-09-10 00:53:49.036102 | orchestrator | Wednesday 10 September 2025 00:49:02 +0000 (0:00:00.726) 0:01:45.677 *** 2025-09-10 00:53:49.036112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-10 00:53:49.036124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.036138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.036225 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.036243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-10 00:53:49.036255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.036266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.036279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.036300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 
'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-10 00:53:49.036318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.036330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.036342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.036353 | orchestrator | 2025-09-10 00:53:49.036365 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-09-10 00:53:49.036377 | orchestrator | Wednesday 10 September 2025 00:49:07 +0000 (0:00:04.260) 0:01:49.938 *** 2025-09-10 00:53:49.036388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-10 00:53:49.036404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.036427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.036440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.036451 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.036463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-10 00:53:49.036475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.036490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.036513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.036524 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.036536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 
'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-10 00:53:49.036547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.036559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-10 00:53:49.036571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-10 00:53:49.036589 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:53:49.036600 | orchestrator |
2025-09-10 00:53:49.036609 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2025-09-10 00:53:49.036619 | orchestrator | Wednesday 10 September 2025 00:49:08 +0000 (0:00:01.305) 0:01:51.244 ***
2025-09-10 00:53:49.036633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-10 00:53:49.036648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-10 00:53:49.036659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-10 00:53:49.036670 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:53:49.036680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-10 00:53:49.036690 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:53:49.036699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-10 00:53:49.036709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-10 00:53:49.036719 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:53:49.036728 | orchestrator |
2025-09-10 00:53:49.036738 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2025-09-10 00:53:49.036747 | orchestrator | Wednesday 10 September 2025 00:49:09 +0000 (0:00:01.325) 0:01:52.569 ***
2025-09-10 00:53:49.036755 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:53:49.036763 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:53:49.036771 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:53:49.036779 | orchestrator |
2025-09-10 00:53:49.036786 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2025-09-10 00:53:49.036794 | orchestrator | Wednesday 10 September 2025 00:49:11 +0000 (0:00:01.409) 0:01:53.978 ***
2025-09-10 00:53:49.036802 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:53:49.036810 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:53:49.036818 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:53:49.036826 | orchestrator |
2025-09-10 00:53:49.036833 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2025-09-10 00:53:49.036841 | orchestrator | Wednesday 10 September 2025 00:49:13 +0000 (0:00:02.231) 0:01:56.210 ***
2025-09-10 00:53:49.036849 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:53:49.036857 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:53:49.036865 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:53:49.036873 | orchestrator |
2025-09-10 00:53:49.036881 | orchestrator | TASK [include_role : cyborg] ***************************************************
2025-09-10 00:53:49.036889 | orchestrator | Wednesday 10 September 2025 00:49:13 +0000 (0:00:00.552) 0:01:56.762 ***
2025-09-10 00:53:49.036897 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:53:49.036904 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:53:49.036912 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:53:49.036927 | orchestrator |
2025-09-10 00:53:49.036935 | orchestrator | TASK [include_role : designate] ************************************************
2025-09-10 00:53:49.036943 | orchestrator | Wednesday 10 September 2025 00:49:14 +0000 (0:00:00.604) 0:01:57.367 ***
2025-09-10 00:53:49.036951 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-10 00:53:49.036959 | orchestrator |
2025-09-10 00:53:49.036967 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2025-09-10 00:53:49.036975 | orchestrator | Wednesday 10 September 2025 00:49:15 +0000 (0:00:00.863) 0:01:58.231 ***
2025-09-10 00:53:49.036983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-10 00:53:49.036998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-10 00:53:49.037007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-10 00:53:49.037016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-10 00:53:49.037024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-10 00:53:49.037032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-10 00:53:49.037045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-09-10 00:53:49.037057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-10 00:53:49.037069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-10 00:53:49.037077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-10 00:53:49.037085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-10 00:53:49.037093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-10 00:53:49.037106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-10 00:53:49.037114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-09-10 00:53:49.037129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-10 00:53:49.037138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-10 00:53:49.037160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-10 00:53:49.037169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-10 00:53:49.037182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-10 00:53:49.037191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-10 00:53:49.037199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-09-10 00:53:49.037207 | orchestrator |
2025-09-10 00:53:49.037218 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2025-09-10 00:53:49.037227 | orchestrator | Wednesday 10 September 2025 00:49:19 +0000 (0:00:04.156) 0:02:02.388 ***
2025-09-10 00:53:49.037240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-10 00:53:49.037249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-10 00:53:49.037261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-10 00:53:49.037270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-10 00:53:49.037278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-10 00:53:49.037286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-10 00:53:49.037301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-09-10 00:53:49.037310 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:53:49.037318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-10 00:53:49.037332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-10 00:53:49.037340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-10 00:53:49.037348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-10 00:53:49.037356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-10 00:53:49.037371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-10 00:53:49.037379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-09-10 00:53:49.037388 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:53:49.037396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-10 00:53:49.037408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-10 00:53:49.037416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-10 00:53:49.037425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-10 00:53:49.037436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-10 00:53:49.037449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-10 00:53:49.037458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-09-10 00:53:49.037470 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:53:49.037478 | orchestrator |
2025-09-10 00:53:49.037486 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2025-09-10 00:53:49.037494 | orchestrator | Wednesday 10 September 2025 00:49:20 +0000 (0:00:00.911) 0:02:03.299 ***
2025-09-10 00:53:49.037503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-09-10 00:53:49.037511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-09-10 00:53:49.037519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-09-10 00:53:49.037527 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:53:49.037535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-09-10 00:53:49.037543 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:53:49.037551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-09-10 00:53:49.037559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-09-10 00:53:49.037567 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:53:49.037575 | orchestrator |
2025-09-10 00:53:49.037582 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2025-09-10 00:53:49.037591 | orchestrator | Wednesday 10 September 2025 00:49:21 +0000 (0:00:00.987) 0:02:04.287 ***
2025-09-10 00:53:49.037599 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:53:49.037606 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:53:49.037614 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:53:49.037622 | orchestrator |
2025-09-10 00:53:49.037630 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2025-09-10 00:53:49.037638 | orchestrator | Wednesday 10 September 2025 00:49:22 +0000 (0:00:01.268) 0:02:05.555 ***
2025-09-10 00:53:49.037646 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:53:49.037653 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:53:49.037661 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:53:49.037669 | orchestrator |
2025-09-10 00:53:49.037677 | orchestrator | TASK [include_role : etcd] *****************************************************
2025-09-10 00:53:49.037685 | orchestrator | Wednesday 10 September 2025 00:49:24 +0000 (0:00:02.115) 0:02:07.671 ***
2025-09-10 00:53:49.037693 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:53:49.037700 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:53:49.037708 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:53:49.037716 | orchestrator |
2025-09-10 00:53:49.037724 | orchestrator | TASK [include_role : glance] ***************************************************
2025-09-10 00:53:49.037732 | orchestrator | Wednesday 10 September 2025 00:49:25 +0000 (0:00:00.518) 0:02:08.189 ***
2025-09-10 00:53:49.037739 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-10 00:53:49.037747 | orchestrator |
2025-09-10 00:53:49.037755 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2025-09-10 00:53:49.037770 | orchestrator | Wednesday 10 September 2025 00:49:26 +0000 (0:00:00.776) 0:02:08.966 ***
2025-09-10 00:53:49.037789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']},
'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-10 00:53:49.037800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-10 00:53:49.037818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 
fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-10 00:53:49.037832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 
2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-10 00:53:49.037958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-10 00:53:49.037979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-10 00:53:49.037988 | orchestrator | 2025-09-10 00:53:49.037996 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-09-10 00:53:49.038004 | orchestrator | Wednesday 10 
September 2025 00:49:30 +0000 (0:00:04.141) 0:02:13.108 *** 2025-09-10 00:53:49.038058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-10 00:53:49.038078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': 
True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-10 00:53:49.038087 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.038096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 
True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-10 00:53:49.038120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-10 00:53:49.038129 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.038138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-10 00:53:49.038177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': 
{'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-10 00:53:49.038187 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.038195 | orchestrator | 2025-09-10 00:53:49.038203 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-09-10 00:53:49.038211 | orchestrator | Wednesday 10 September 2025 00:49:33 +0000 (0:00:03.386) 0:02:16.494 *** 2025-09-10 00:53:49.038220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-10 00:53:49.038228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-10 00:53:49.038237 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.038245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-10 00:53:49.038254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-10 00:53:49.038267 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.038278 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-10 00:53:49.038292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-10 00:53:49.038300 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.038308 | orchestrator | 2025-09-10 00:53:49.038316 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-09-10 00:53:49.038324 | orchestrator | Wednesday 10 September 2025 00:49:36 +0000 (0:00:03.400) 0:02:19.895 *** 2025-09-10 00:53:49.038332 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:53:49.038339 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:53:49.038347 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:53:49.038355 | orchestrator | 2025-09-10 00:53:49.038363 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-09-10 00:53:49.038371 | orchestrator | Wednesday 10 September 2025 00:49:38 +0000 (0:00:01.324) 0:02:21.219 *** 2025-09-10 00:53:49.038379 | orchestrator | changed: [testbed-node-0] 
2025-09-10 00:53:49.038387 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:53:49.038394 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:53:49.038402 | orchestrator | 2025-09-10 00:53:49.038410 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-09-10 00:53:49.038418 | orchestrator | Wednesday 10 September 2025 00:49:40 +0000 (0:00:02.095) 0:02:23.315 *** 2025-09-10 00:53:49.038426 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.038434 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.038442 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.038449 | orchestrator | 2025-09-10 00:53:49.038457 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-09-10 00:53:49.038465 | orchestrator | Wednesday 10 September 2025 00:49:40 +0000 (0:00:00.563) 0:02:23.878 *** 2025-09-10 00:53:49.038473 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 00:53:49.038505 | orchestrator | 2025-09-10 00:53:49.038513 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-09-10 00:53:49.038521 | orchestrator | Wednesday 10 September 2025 00:49:41 +0000 (0:00:00.850) 0:02:24.728 *** 2025-09-10 00:53:49.038530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 
'listen_port': '3000'}}}}) 2025-09-10 00:53:49.038544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-10 00:53:49.038553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-10 00:53:49.038563 | orchestrator | 2025-09-10 00:53:49.038572 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-09-10 00:53:49.038582 | orchestrator | Wednesday 10 September 2025 00:49:45 +0000 (0:00:03.320) 0:02:28.049 *** 2025-09-10 00:53:49.038603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-10 00:53:49.038613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-10 00:53:49.038623 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.038631 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.038641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': 
'3000'}}}})  2025-09-10 00:53:49.038657 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.038667 | orchestrator | 2025-09-10 00:53:49.038676 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-09-10 00:53:49.038686 | orchestrator | Wednesday 10 September 2025 00:49:45 +0000 (0:00:00.751) 0:02:28.801 *** 2025-09-10 00:53:49.038695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-10 00:53:49.038705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-10 00:53:49.038714 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.038723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-10 00:53:49.038733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-10 00:53:49.038742 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.038751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-10 00:53:49.038761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-10 00:53:49.038770 | orchestrator | skipping: [testbed-node-2] 
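The haproxy-config loop items dumped above carry each service's frontend definitions as a nested `haproxy` dict, with internal entries (`external: False`) and external entries (`external: True`, carrying an `external_fqdn`). A minimal sketch of how that mapping partitions into internal and external frontends, using the grafana item exactly as logged; the `split_frontends` helper is hypothetical and not part of kolla-ansible:

```python
# Mirrors the 'haproxy' sub-dict of the grafana loop item shown in the log.
# Note the mixed types, reproduced as logged: 'enabled' is the string 'yes'
# for the internal frontend but the boolean True for the external one.
grafana_haproxy = {
    "grafana_server": {
        "enabled": "yes",
        "mode": "http",
        "external": False,
        "port": "3000",
        "listen_port": "3000",
    },
    "grafana_server_external": {
        "enabled": True,
        "mode": "http",
        "external": True,
        "external_fqdn": "api.testbed.osism.xyz",
        "port": "3000",
        "listen_port": "3000",
    },
}


def split_frontends(haproxy_map):
    """Partition a service's haproxy mapping into internal and external
    frontend names, skipping disabled entries. Accepts both 'yes' and True
    for 'enabled', matching the mixed representations in the log."""
    internal, external = [], []
    for name, cfg in haproxy_map.items():
        if cfg.get("enabled") not in (True, "yes"):
            continue
        (external if cfg.get("external") else internal).append(name)
    return internal, external
```

Under these assumptions, `split_frontends(grafana_haproxy)` yields `(['grafana_server'], ['grafana_server_external'])`, which is why the role renders one listener on port 3000 for the internal VIP and a second one bound to `api.testbed.osism.xyz` externally.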
2025-09-10 00:53:49.038779 | orchestrator | 2025-09-10 00:53:49.038788 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-09-10 00:53:49.038798 | orchestrator | Wednesday 10 September 2025 00:49:46 +0000 (0:00:00.686) 0:02:29.487 *** 2025-09-10 00:53:49.038807 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:53:49.038817 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:53:49.038826 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:53:49.038835 | orchestrator | 2025-09-10 00:53:49.038844 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-09-10 00:53:49.038854 | orchestrator | Wednesday 10 September 2025 00:49:47 +0000 (0:00:01.254) 0:02:30.742 *** 2025-09-10 00:53:49.038863 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:53:49.038872 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:53:49.038881 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:53:49.038890 | orchestrator | 2025-09-10 00:53:49.038899 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-09-10 00:53:49.038908 | orchestrator | Wednesday 10 September 2025 00:49:49 +0000 (0:00:02.025) 0:02:32.767 *** 2025-09-10 00:53:49.038918 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.038927 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.038940 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.038948 | orchestrator | 2025-09-10 00:53:49.038956 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-09-10 00:53:49.038964 | orchestrator | Wednesday 10 September 2025 00:49:50 +0000 (0:00:00.597) 0:02:33.365 *** 2025-09-10 00:53:49.038985 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 00:53:49.038993 | orchestrator | 2025-09-10 00:53:49.039001 | orchestrator | TASK [haproxy-config 
: Copying over horizon haproxy config] ******************** 2025-09-10 00:53:49.039009 | orchestrator | Wednesday 10 September 2025 00:49:51 +0000 (0:00:00.891) 0:02:34.256 *** 2025-09-10 00:53:49.039018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 
'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-10 00:53:49.039042 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-10 00:53:49.039057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-10 00:53:49.039066 | orchestrator | 2025-09-10 00:53:49.039074 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-09-10 00:53:49.039082 | orchestrator | Wednesday 10 September 2025 00:49:55 +0000 (0:00:04.509) 0:02:38.766 *** 2025-09-10 00:53:49.039099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-10 00:53:49.039114 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.039123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-10 00:53:49.039132 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.039192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 
'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-10 00:53:49.039209 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.039217 | orchestrator | 2025-09-10 00:53:49.039225 | orchestrator | TASK [haproxy-config : Configuring firewall for 
horizon] *********************** 2025-09-10 00:53:49.039233 | orchestrator | Wednesday 10 September 2025 00:49:57 +0000 (0:00:01.224) 0:02:39.990 *** 2025-09-10 00:53:49.039241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-10 00:53:49.039250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-10 00:53:49.039259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-10 00:53:49.039267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-10 00:53:49.039275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-10 00:53:49.039283 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.039291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-10 00:53:49.039299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-10 00:53:49.039311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-10 00:53:49.039328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-10 00:53:49.039336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-10 00:53:49.039344 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.039352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 
'no'}})  2025-09-10 00:53:49.039360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-10 00:53:49.039368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-10 00:53:49.039376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-10 00:53:49.039384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-10 00:53:49.039392 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.039400 | orchestrator | 2025-09-10 00:53:49.039408 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-09-10 00:53:49.039416 | orchestrator | Wednesday 10 September 2025 00:49:58 +0000 (0:00:00.946) 0:02:40.937 *** 2025-09-10 00:53:49.039424 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:53:49.039431 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:53:49.039439 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:53:49.039447 | orchestrator | 2025-09-10 00:53:49.039455 | orchestrator | TASK [proxysql-config : Copying over 
horizon ProxySQL rules config] ************ 2025-09-10 00:53:49.039463 | orchestrator | Wednesday 10 September 2025 00:49:59 +0000 (0:00:01.485) 0:02:42.422 *** 2025-09-10 00:53:49.039471 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:53:49.039479 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:53:49.039486 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:53:49.039494 | orchestrator | 2025-09-10 00:53:49.039502 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-09-10 00:53:49.039510 | orchestrator | Wednesday 10 September 2025 00:50:01 +0000 (0:00:02.066) 0:02:44.489 *** 2025-09-10 00:53:49.039518 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.039526 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.039534 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.039541 | orchestrator | 2025-09-10 00:53:49.039549 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-09-10 00:53:49.039557 | orchestrator | Wednesday 10 September 2025 00:50:01 +0000 (0:00:00.322) 0:02:44.811 *** 2025-09-10 00:53:49.039565 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.039573 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.039586 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.039594 | orchestrator | 2025-09-10 00:53:49.039602 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-09-10 00:53:49.039610 | orchestrator | Wednesday 10 September 2025 00:50:02 +0000 (0:00:00.527) 0:02:45.339 *** 2025-09-10 00:53:49.039618 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 00:53:49.039625 | orchestrator | 2025-09-10 00:53:49.039633 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-09-10 00:53:49.039641 | orchestrator | Wednesday 10 
September 2025 00:50:03 +0000 (0:00:00.951) 0:02:46.290 *** 2025-09-10 00:53:49.039657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-10 00:53:49.039667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-10 00:53:49.039676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-10 00:53:49.039685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-10 00:53:49.039699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-10 00:53:49.039717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-10 00:53:49.039724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-10 00:53:49.039731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-10 00:53:49.039738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-10 00:53:49.039746 | orchestrator | 2025-09-10 00:53:49.039752 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-09-10 00:53:49.039759 | orchestrator | Wednesday 10 September 2025 00:50:06 +0000 (0:00:03.619) 0:02:49.909 *** 2025-09-10 00:53:49.039767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-10 00:53:49.039781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-10 00:53:49.039792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-10 00:53:49.039799 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.039806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-10 00:53:49.039814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-10 00:53:49.039821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-10 00:53:49.039832 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.039840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-10 00:53:49.039853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-10 00:53:49.039860 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-10 00:53:49.039867 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.039874 | orchestrator | 2025-09-10 00:53:49.039881 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-09-10 00:53:49.039888 | orchestrator | Wednesday 10 September 2025 00:50:07 +0000 (0:00:00.872) 0:02:50.782 *** 2025-09-10 00:53:49.039895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-10 00:53:49.039902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-10 00:53:49.039909 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.039916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-10 00:53:49.039923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 
'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-10 00:53:49.039934 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.039941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-10 00:53:49.039948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-10 00:53:49.039955 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.039962 | orchestrator | 2025-09-10 00:53:49.039968 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-09-10 00:53:49.039975 | orchestrator | Wednesday 10 September 2025 00:50:08 +0000 (0:00:00.890) 0:02:51.673 *** 2025-09-10 00:53:49.039982 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:53:49.039988 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:53:49.039995 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:53:49.040002 | orchestrator | 2025-09-10 00:53:49.040008 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-09-10 00:53:49.040015 | orchestrator | Wednesday 10 September 2025 00:50:10 +0000 (0:00:01.444) 0:02:53.117 *** 2025-09-10 00:53:49.040022 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:53:49.040028 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:53:49.040035 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:53:49.040041 | orchestrator | 
2025-09-10 00:53:49.040048 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-09-10 00:53:49.040055 | orchestrator | Wednesday 10 September 2025 00:50:12 +0000 (0:00:02.077) 0:02:55.194 *** 2025-09-10 00:53:49.040061 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.040068 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.040074 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.040081 | orchestrator | 2025-09-10 00:53:49.040087 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-09-10 00:53:49.040094 | orchestrator | Wednesday 10 September 2025 00:50:12 +0000 (0:00:00.560) 0:02:55.755 *** 2025-09-10 00:53:49.040103 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 00:53:49.040110 | orchestrator | 2025-09-10 00:53:49.040117 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-09-10 00:53:49.040124 | orchestrator | Wednesday 10 September 2025 00:50:13 +0000 (0:00:01.013) 0:02:56.769 *** 2025-09-10 00:53:49.040134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-10 00:53:49.040154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.040168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-10 00:53:49.040175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.040186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-10 00:53:49.040282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.040293 | orchestrator | 2025-09-10 00:53:49.040300 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-09-10 00:53:49.040307 | orchestrator | Wednesday 10 September 2025 00:50:18 +0000 (0:00:04.469) 0:03:01.239 *** 2025-09-10 00:53:49.040319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-10 00:53:49.040326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.040333 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.040340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-10 00:53:49.040354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-10 00:53:49.040362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.040373 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.040380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.040387 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.040394 | orchestrator | 2025-09-10 
00:53:49.040400 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-09-10 00:53:49.040407 | orchestrator | Wednesday 10 September 2025 00:50:19 +0000 (0:00:01.250) 0:03:02.489 *** 2025-09-10 00:53:49.040414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-10 00:53:49.040421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-10 00:53:49.040428 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.040434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-10 00:53:49.040441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-10 00:53:49.040448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-10 00:53:49.040455 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.040461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-10 00:53:49.040468 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.040474 | orchestrator | 2025-09-10 00:53:49.040481 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] 
************* 2025-09-10 00:53:49.040488 | orchestrator | Wednesday 10 September 2025 00:50:21 +0000 (0:00:01.567) 0:03:04.057 *** 2025-09-10 00:53:49.040494 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:53:49.040501 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:53:49.040508 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:53:49.040514 | orchestrator | 2025-09-10 00:53:49.040521 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-09-10 00:53:49.040527 | orchestrator | Wednesday 10 September 2025 00:50:22 +0000 (0:00:01.368) 0:03:05.425 *** 2025-09-10 00:53:49.040534 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:53:49.040543 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:53:49.040550 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:53:49.040556 | orchestrator | 2025-09-10 00:53:49.040563 | orchestrator | TASK [include_role : manila] *************************************************** 2025-09-10 00:53:49.040576 | orchestrator | Wednesday 10 September 2025 00:50:24 +0000 (0:00:01.976) 0:03:07.402 *** 2025-09-10 00:53:49.040586 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 00:53:49.040593 | orchestrator | 2025-09-10 00:53:49.040600 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-09-10 00:53:49.040606 | orchestrator | Wednesday 10 September 2025 00:50:25 +0000 (0:00:01.221) 0:03:08.623 *** 2025-09-10 00:53:49.040613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-10 00:53:49.040621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.040628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.040635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.040642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-10 00:53:49.040661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.040669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': 
{'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.040676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.040683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-10 00:53:49.040690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.040697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.040713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 
5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.040720 | orchestrator | 2025-09-10 00:53:49.040727 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-09-10 00:53:49.040734 | orchestrator | Wednesday 10 September 2025 00:50:29 +0000 (0:00:03.714) 0:03:12.338 *** 2025-09-10 00:53:49.040741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-10 00:53:49.040748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.040754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 
'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.040761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.040768 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.040775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-10 00:53:49.040792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.040799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.040806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.040813 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.040820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-10 00:53:49.040827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.040837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.040850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.040857 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.040864 | orchestrator | 2025-09-10 00:53:49.040871 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-09-10 00:53:49.040878 | orchestrator | Wednesday 10 September 2025 00:50:30 +0000 (0:00:00.665) 0:03:13.003 *** 2025-09-10 00:53:49.040885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-10 00:53:49.040892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-10 00:53:49.040899 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.040907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-10 00:53:49.040915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-10 00:53:49.040922 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.040930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-10 00:53:49.040938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-10 00:53:49.040946 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.040954 | orchestrator | 2025-09-10 00:53:49.040962 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-09-10 00:53:49.040970 | orchestrator | Wednesday 10 September 2025 00:50:31 +0000 (0:00:01.503) 0:03:14.507 *** 2025-09-10 00:53:49.040978 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:53:49.040985 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:53:49.040993 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:53:49.041000 | orchestrator | 2025-09-10 00:53:49.041008 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-09-10 00:53:49.041016 | orchestrator | Wednesday 10 September 2025 00:50:32 +0000 (0:00:01.361) 0:03:15.868 *** 2025-09-10 00:53:49.041024 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:53:49.041032 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:53:49.041040 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:53:49.041050 | orchestrator | 2025-09-10 
00:53:49.041058 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-09-10 00:53:49.041066 | orchestrator | Wednesday 10 September 2025 00:50:35 +0000 (0:00:02.102) 0:03:17.970 *** 2025-09-10 00:53:49.041074 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 00:53:49.041082 | orchestrator | 2025-09-10 00:53:49.041090 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-09-10 00:53:49.041098 | orchestrator | Wednesday 10 September 2025 00:50:36 +0000 (0:00:01.288) 0:03:19.259 *** 2025-09-10 00:53:49.041106 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-10 00:53:49.041114 | orchestrator | 2025-09-10 00:53:49.041121 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-09-10 00:53:49.041129 | orchestrator | Wednesday 10 September 2025 00:50:39 +0000 (0:00:02.858) 0:03:22.117 *** 2025-09-10 00:53:49.041157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-10 00:53:49.041168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-10 00:53:49.041176 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.041184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 
'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-10 00:53:49.041197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-10 00:53:49.041206 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.041222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 
5 backup', '']}}}})  2025-09-10 00:53:49.041232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-10 00:53:49.041245 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.041252 | orchestrator | 2025-09-10 00:53:49.041260 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-09-10 00:53:49.041266 | orchestrator | Wednesday 10 September 2025 00:50:41 +0000 (0:00:02.711) 0:03:24.829 *** 2025-09-10 00:53:49.041276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': 
['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-10 00:53:49.041287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-10 00:53:49.041295 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.041302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-10 00:53:49.041314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-10 00:53:49.041321 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.041335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 
inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-10 00:53:49.041343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-10 00:53:49.041356 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.041363 | orchestrator | 2025-09-10 00:53:49.041369 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-09-10 00:53:49.041376 | orchestrator | Wednesday 10 September 2025 00:50:44 +0000 (0:00:02.382) 0:03:27.212 *** 2025-09-10 00:53:49.041383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-10 00:53:49.041390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-10 00:53:49.041397 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.041404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-10 00:53:49.041414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-10 00:53:49.041421 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.041431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-10 00:53:49.041439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-10 00:53:49.041450 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.041456 | orchestrator | 2025-09-10 00:53:49.041463 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-09-10 00:53:49.041470 | orchestrator | Wednesday 10 September 2025 00:50:47 +0000 (0:00:02.963) 0:03:30.176 *** 2025-09-10 00:53:49.041477 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:53:49.041483 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:53:49.041490 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:53:49.041496 | orchestrator | 2025-09-10 00:53:49.041503 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-09-10 00:53:49.041510 | orchestrator | Wednesday 10 September 2025 00:50:49 +0000 (0:00:01.856) 0:03:32.032 *** 2025-09-10 00:53:49.041516 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.041523 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.041529 | orchestrator | 
skipping: [testbed-node-2] 2025-09-10 00:53:49.041536 | orchestrator | 2025-09-10 00:53:49.041542 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-09-10 00:53:49.041549 | orchestrator | Wednesday 10 September 2025 00:50:50 +0000 (0:00:01.525) 0:03:33.557 *** 2025-09-10 00:53:49.041555 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.041562 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.041568 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.041575 | orchestrator | 2025-09-10 00:53:49.041582 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-09-10 00:53:49.041588 | orchestrator | Wednesday 10 September 2025 00:50:50 +0000 (0:00:00.330) 0:03:33.888 *** 2025-09-10 00:53:49.041595 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 00:53:49.041602 | orchestrator | 2025-09-10 00:53:49.041608 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-09-10 00:53:49.041615 | orchestrator | Wednesday 10 September 2025 00:50:52 +0000 (0:00:01.448) 0:03:35.336 *** 2025-09-10 00:53:49.041622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 
2025-09-10 00:53:49.041632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-10 00:53:49.041643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-10 00:53:49.041654 | orchestrator | 2025-09-10 00:53:49.041660 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-09-10 00:53:49.041667 | orchestrator | Wednesday 10 September 2025 00:50:53 +0000 (0:00:01.476) 0:03:36.813 *** 2025-09-10 00:53:49.041674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 
'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-10 00:53:49.041681 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.041688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-10 00:53:49.041695 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.041702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-10 00:53:49.041709 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.041715 | orchestrator | 2025-09-10 00:53:49.041722 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-09-10 00:53:49.041729 | orchestrator | Wednesday 10 September 2025 00:50:54 +0000 (0:00:00.382) 0:03:37.195 *** 2025-09-10 00:53:49.041735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-10 00:53:49.041743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-10 00:53:49.041753 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.041763 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.041773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-10 00:53:49.041780 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.041786 | orchestrator | 2025-09-10 00:53:49.041793 | orchestrator | TASK 
[proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-09-10 00:53:49.041799 | orchestrator | Wednesday 10 September 2025 00:50:54 +0000 (0:00:00.582) 0:03:37.778 *** 2025-09-10 00:53:49.041806 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.041813 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.041819 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.041825 | orchestrator | 2025-09-10 00:53:49.041832 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-09-10 00:53:49.041839 | orchestrator | Wednesday 10 September 2025 00:50:55 +0000 (0:00:00.731) 0:03:38.510 *** 2025-09-10 00:53:49.041845 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.041852 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.041858 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.041865 | orchestrator | 2025-09-10 00:53:49.041871 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-09-10 00:53:49.041878 | orchestrator | Wednesday 10 September 2025 00:50:56 +0000 (0:00:01.277) 0:03:39.788 *** 2025-09-10 00:53:49.041884 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.041891 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.041897 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.041904 | orchestrator | 2025-09-10 00:53:49.041910 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-09-10 00:53:49.041917 | orchestrator | Wednesday 10 September 2025 00:50:57 +0000 (0:00:00.305) 0:03:40.093 *** 2025-09-10 00:53:49.041924 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 00:53:49.041930 | orchestrator | 2025-09-10 00:53:49.041937 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-09-10 00:53:49.041944 
| orchestrator | Wednesday 10 September 2025 00:50:58 +0000 (0:00:01.418) 0:03:41.511 *** 2025-09-10 00:53:49.041950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-10 00:53:49.041958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.041968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 
'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.041979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.041986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-10 00:53:49.042004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-10 00:53:49.042012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.042040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.042054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-10 00:53:49.042067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.042074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': 
{'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-10 00:53:49.042081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.042088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.042095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 
'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-10 00:53:49.042109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-10 00:53:49.042318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.042328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.042335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-10 00:53:49.042342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-10 00:53:49.042349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': 
{'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-10 00:53:49.042361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-10 00:53:49.042368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.042383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.042391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-10 00:53:49.042398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-10 00:53:49.042405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-10 00:53:49.042419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.042429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-10 00:53:49.042440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-10 00:53:49.042447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.042454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 
192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-10 00:53:49.042461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-10 00:53:49.042472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-10 00:53:49.042485 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.042493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.042500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.042507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-10 00:53:49.042517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.042524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-10 00:53:49.042534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-10 00:53:49.042545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.042552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-10 00:53:49.042559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.042570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-10 00:53:49.042576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-10 00:53:49.042583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.042597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-10 00:53:49.042605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-10 00:53:49.042611 | orchestrator | 2025-09-10 00:53:49.042617 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-09-10 00:53:49.042624 | orchestrator | Wednesday 10 September 2025 00:51:02 +0000 (0:00:04.227) 0:03:45.739 *** 2025-09-10 00:53:49.042630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-10 00:53:49.042641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.042647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.042677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.042685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-10 00:53:49.042692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.042701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-10 00:53:49.042708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-10 00:53:49.042714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.042727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-10 00:53:49.042734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-10 00:53:49.042741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.042752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.042759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-10 00:53:49.042765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.042778 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-10 00:53:49.042785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.042792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.042803 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-10 00:53:49.042809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-10 00:53:49.042821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-10 00:53:49.042832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.042838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.042851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-10 00:53:49.042857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-10 00:53:49.042864 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.042870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.042877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-10 00:53:49.042890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.042896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.042908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-10 00:53:49.042914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-10 00:53:49.042920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 
'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.042927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.042940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-10 00:53:49.042946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-10 00:53:49.042957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-10 00:53:49.042963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.042970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-10 
00:53:49.042976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-10 00:53:49.042983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.042996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.043017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-10 00:53:49.043024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-10 00:53:49.043030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-10 00:53:49.043036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-10 00:53:49.043043 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.043049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.043062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-10 00:53:49.043074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-10 00:53:49.043081 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.043087 | orchestrator | 2025-09-10 00:53:49.043093 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-09-10 00:53:49.043099 | orchestrator | Wednesday 10 September 2025 00:51:04 +0000 (0:00:01.463) 0:03:47.202 *** 2025-09-10 00:53:49.043105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  
2025-09-10 00:53:49.043112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-10 00:53:49.043118 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.043124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-10 00:53:49.043130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-10 00:53:49.043136 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.043156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-10 00:53:49.043163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-10 00:53:49.043169 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.043175 | orchestrator | 2025-09-10 00:53:49.043181 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-09-10 00:53:49.043187 | orchestrator | Wednesday 10 September 2025 00:51:06 +0000 (0:00:02.105) 0:03:49.308 *** 2025-09-10 00:53:49.043194 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:53:49.043200 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:53:49.043206 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:53:49.043212 | orchestrator | 2025-09-10 00:53:49.043218 | orchestrator | 
TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-09-10 00:53:49.043224 | orchestrator | Wednesday 10 September 2025 00:51:07 +0000 (0:00:01.263) 0:03:50.571 *** 2025-09-10 00:53:49.043230 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:53:49.043237 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:53:49.043243 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:53:49.043249 | orchestrator | 2025-09-10 00:53:49.043255 | orchestrator | TASK [include_role : placement] ************************************************ 2025-09-10 00:53:49.043261 | orchestrator | Wednesday 10 September 2025 00:51:09 +0000 (0:00:02.015) 0:03:52.586 *** 2025-09-10 00:53:49.043272 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 00:53:49.043278 | orchestrator | 2025-09-10 00:53:49.043284 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-09-10 00:53:49.043290 | orchestrator | Wednesday 10 September 2025 00:51:10 +0000 (0:00:01.246) 0:03:53.833 *** 2025-09-10 00:53:49.043304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no'}}}}) 2025-09-10 00:53:49.043312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-10 00:53:49.043318 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-10 00:53:49.043325 | orchestrator | 2025-09-10 00:53:49.043331 | orchestrator | TASK 
[haproxy-config : Add configuration for placement when using single external frontend] *** 2025-09-10 00:53:49.043337 | orchestrator | Wednesday 10 September 2025 00:51:14 +0000 (0:00:03.544) 0:03:57.378 *** 2025-09-10 00:53:49.043343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-10 00:53:49.043354 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.043367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-10 00:53:49.043373 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.043380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-10 00:53:49.043386 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.043392 | orchestrator | 2025-09-10 00:53:49.043398 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-09-10 00:53:49.043405 | orchestrator | Wednesday 10 September 2025 00:51:14 +0000 (0:00:00.514) 0:03:57.892 *** 2025-09-10 00:53:49.043411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-10 00:53:49.043417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}})  2025-09-10 00:53:49.043424 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.043430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-10 00:53:49.043436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-10 00:53:49.043442 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.043449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-10 00:53:49.043455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-10 00:53:49.043465 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.043471 | orchestrator | 2025-09-10 00:53:49.043477 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-09-10 00:53:49.043483 | orchestrator | Wednesday 10 September 2025 00:51:15 +0000 (0:00:00.763) 0:03:58.655 *** 2025-09-10 00:53:49.043489 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:53:49.043496 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:53:49.043502 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:53:49.043508 | orchestrator | 2025-09-10 00:53:49.043514 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-09-10 00:53:49.043520 | orchestrator | Wednesday 10 September 2025 
00:51:17 +0000 (0:00:01.352) 0:04:00.007 *** 2025-09-10 00:53:49.043526 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:53:49.043532 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:53:49.043538 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:53:49.043544 | orchestrator | 2025-09-10 00:53:49.043550 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-09-10 00:53:49.043556 | orchestrator | Wednesday 10 September 2025 00:51:19 +0000 (0:00:02.104) 0:04:02.112 *** 2025-09-10 00:53:49.043562 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 00:53:49.043569 | orchestrator | 2025-09-10 00:53:49.043575 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-09-10 00:53:49.043581 | orchestrator | Wednesday 10 September 2025 00:51:20 +0000 (0:00:01.635) 0:04:03.747 *** 2025-09-10 00:53:49.043696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-10 00:53:49.043708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.043715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.043727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-10 00:53:49.043737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.043761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  
2025-09-10 00:53:49.043769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-10 00:53:49.043776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.043787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': 
{'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.043793 | orchestrator | 2025-09-10 00:53:49.043799 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-09-10 00:53:49.043805 | orchestrator | Wednesday 10 September 2025 00:51:25 +0000 (0:00:04.361) 0:04:08.108 *** 2025-09-10 00:53:49.043832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-10 00:53:49.043840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.043847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.043853 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.043860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-10 00:53:49.043872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.043878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.043885 | orchestrator | 
skipping: [testbed-node-1] 2025-09-10 00:53:49.043912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-10 00:53:49.043920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.043931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.043937 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.043944 | orchestrator | 2025-09-10 00:53:49.043950 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-09-10 00:53:49.043956 | orchestrator | Wednesday 10 September 2025 00:51:26 +0000 (0:00:01.226) 0:04:09.335 *** 2025-09-10 00:53:49.043963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-10 00:53:49.043970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-10 00:53:49.043976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-10 00:53:49.043982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-10 00:53:49.043989 | orchestrator | skipping: [testbed-node-0] 
2025-09-10 00:53:49.043995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-10 00:53:49.044001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-10 00:53:49.044010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-10 00:53:49.044017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-10 00:53:49.044040 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.044047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-10 00:53:49.044053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-10 00:53:49.044060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-10 00:53:49.044066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-10 00:53:49.044077 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.044083 | orchestrator | 2025-09-10 00:53:49.044089 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-09-10 00:53:49.044095 | orchestrator | Wednesday 10 September 2025 00:51:27 +0000 (0:00:00.885) 0:04:10.220 *** 2025-09-10 00:53:49.044102 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:53:49.044108 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:53:49.044114 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:53:49.044120 | orchestrator | 2025-09-10 00:53:49.044126 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-09-10 00:53:49.044133 | orchestrator | Wednesday 10 September 2025 00:51:28 +0000 (0:00:01.405) 0:04:11.625 *** 2025-09-10 00:53:49.044139 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:53:49.044181 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:53:49.044188 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:53:49.044194 | orchestrator | 2025-09-10 00:53:49.044200 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-09-10 00:53:49.044206 | orchestrator | Wednesday 10 September 2025 00:51:30 +0000 (0:00:02.129) 0:04:13.755 *** 2025-09-10 00:53:49.044212 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 00:53:49.044218 | orchestrator | 2025-09-10 00:53:49.044224 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-09-10 00:53:49.044230 | orchestrator | Wednesday 10 September 2025 00:51:32 +0000 (0:00:01.575) 0:04:15.330 *** 2025-09-10 00:53:49.044237 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-09-10 00:53:49.044243 | orchestrator | 2025-09-10 00:53:49.044249 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-09-10 00:53:49.044255 | orchestrator | Wednesday 10 September 2025 00:51:33 +0000 (0:00:00.795) 0:04:16.125 *** 2025-09-10 00:53:49.044262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-10 00:53:49.044269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-10 00:53:49.044275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-10 00:53:49.044281 | 
orchestrator | 2025-09-10 00:53:49.044291 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-09-10 00:53:49.044299 | orchestrator | Wednesday 10 September 2025 00:51:38 +0000 (0:00:04.875) 0:04:21.000 *** 2025-09-10 00:53:49.044326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-10 00:53:49.044340 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.044348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-10 00:53:49.044355 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.044363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': 
'6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-10 00:53:49.044370 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.044377 | orchestrator | 2025-09-10 00:53:49.044385 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-09-10 00:53:49.044392 | orchestrator | Wednesday 10 September 2025 00:51:39 +0000 (0:00:01.065) 0:04:22.066 *** 2025-09-10 00:53:49.044400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-10 00:53:49.044407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-10 00:53:49.044415 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.044422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-10 00:53:49.044430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-10 00:53:49.044437 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.044444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-10 00:53:49.044452 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-10 00:53:49.044459 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.044465 | orchestrator | 2025-09-10 00:53:49.044471 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-10 00:53:49.044478 | orchestrator | Wednesday 10 September 2025 00:51:40 +0000 (0:00:01.618) 0:04:23.684 *** 2025-09-10 00:53:49.044488 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:53:49.044494 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:53:49.044501 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:53:49.044507 | orchestrator | 2025-09-10 00:53:49.044513 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-10 00:53:49.044520 | orchestrator | Wednesday 10 September 2025 00:51:43 +0000 (0:00:02.604) 0:04:26.289 *** 2025-09-10 00:53:49.044526 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:53:49.044532 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:53:49.044542 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:53:49.044548 | orchestrator | 2025-09-10 00:53:49.044555 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-09-10 00:53:49.044561 | orchestrator | Wednesday 10 September 2025 00:51:46 +0000 (0:00:03.103) 0:04:29.392 *** 2025-09-10 00:53:49.044582 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-09-10 00:53:49.044589 | orchestrator | 2025-09-10 00:53:49.044596 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-09-10 00:53:49.044602 | 
orchestrator | Wednesday 10 September 2025 00:51:47 +0000 (0:00:01.442) 0:04:30.835 *** 2025-09-10 00:53:49.044609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-10 00:53:49.044616 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.044622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-10 00:53:49.044629 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.044635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-10 00:53:49.044642 | orchestrator | skipping: [testbed-node-2] 
2025-09-10 00:53:49.044649 | orchestrator | 2025-09-10 00:53:49.044655 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-09-10 00:53:49.044661 | orchestrator | Wednesday 10 September 2025 00:51:49 +0000 (0:00:01.269) 0:04:32.105 *** 2025-09-10 00:53:49.044666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-10 00:53:49.044676 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.044682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-10 00:53:49.044687 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.044693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-10 00:53:49.044703 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.044709 | orchestrator | 2025-09-10 00:53:49.044714 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-09-10 00:53:49.044720 | orchestrator | Wednesday 10 September 2025 00:51:50 +0000 (0:00:01.397) 0:04:33.502 *** 2025-09-10 00:53:49.044725 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.044730 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.044736 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.044741 | orchestrator | 2025-09-10 00:53:49.044761 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-10 00:53:49.044767 | orchestrator | Wednesday 10 September 2025 00:51:52 +0000 (0:00:02.039) 0:04:35.542 *** 2025-09-10 00:53:49.044772 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:53:49.044778 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:53:49.044783 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:53:49.044789 | orchestrator | 2025-09-10 00:53:49.044794 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-10 00:53:49.044800 | orchestrator | Wednesday 10 September 2025 00:51:55 +0000 (0:00:02.480) 0:04:38.022 *** 2025-09-10 00:53:49.044805 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:53:49.044810 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:53:49.044815 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:53:49.044821 | orchestrator | 2025-09-10 00:53:49.044826 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-09-10 00:53:49.044832 | orchestrator | Wednesday 10 September 2025 00:51:58 +0000 (0:00:03.062) 0:04:41.085 *** 2025-09-10 
00:53:49.044837 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-09-10 00:53:49.044842 | orchestrator | 2025-09-10 00:53:49.044848 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-09-10 00:53:49.044853 | orchestrator | Wednesday 10 September 2025 00:51:58 +0000 (0:00:00.801) 0:04:41.886 *** 2025-09-10 00:53:49.044859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-10 00:53:49.044865 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.044870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-10 00:53:49.044881 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.044886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': 
False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-10 00:53:49.044892 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.044898 | orchestrator | 2025-09-10 00:53:49.044903 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-09-10 00:53:49.044908 | orchestrator | Wednesday 10 September 2025 00:52:00 +0000 (0:00:01.388) 0:04:43.274 *** 2025-09-10 00:53:49.044914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-10 00:53:49.044920 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.044928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-10 00:53:49.044934 | orchestrator | skipping: [testbed-node-1] 2025-09-10 
00:53:49.044954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-10 00:53:49.044961 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.044966 | orchestrator | 2025-09-10 00:53:49.044972 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-09-10 00:53:49.044977 | orchestrator | Wednesday 10 September 2025 00:52:01 +0000 (0:00:01.422) 0:04:44.697 *** 2025-09-10 00:53:49.044983 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.044988 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.044994 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.044999 | orchestrator | 2025-09-10 00:53:49.045004 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-10 00:53:49.045010 | orchestrator | Wednesday 10 September 2025 00:52:03 +0000 (0:00:01.561) 0:04:46.259 *** 2025-09-10 00:53:49.045015 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:53:49.045020 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:53:49.045031 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:53:49.045036 | orchestrator | 2025-09-10 00:53:49.045042 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-10 00:53:49.045047 | orchestrator | Wednesday 10 September 2025 00:52:05 +0000 (0:00:02.410) 0:04:48.670 *** 2025-09-10 00:53:49.045052 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:53:49.045058 | orchestrator 
| ok: [testbed-node-1] 2025-09-10 00:53:49.045063 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:53:49.045068 | orchestrator | 2025-09-10 00:53:49.045074 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-09-10 00:53:49.045079 | orchestrator | Wednesday 10 September 2025 00:52:09 +0000 (0:00:03.402) 0:04:52.072 *** 2025-09-10 00:53:49.045084 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 00:53:49.045090 | orchestrator | 2025-09-10 00:53:49.045095 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-09-10 00:53:49.045101 | orchestrator | Wednesday 10 September 2025 00:52:10 +0000 (0:00:01.601) 0:04:53.673 *** 2025-09-10 00:53:49.045106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-10 00:53:49.045112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-10 00:53:49.045118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-10 00:53:49.045151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-10 00:53:49.045163 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-10 00:53:49.045169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-10 00:53:49.045175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.045180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-10 00:53:49.045198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-10 00:53:49.045222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.045229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-10 00:53:49.045239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-10 00:53:49.045245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-10 00:53:49.045250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-10 00:53:49.045256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.045261 | orchestrator | 2025-09-10 00:53:49.045267 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-09-10 00:53:49.045272 | orchestrator | Wednesday 10 September 2025 00:52:14 +0000 (0:00:03.378) 0:04:57.052 *** 2025-09-10 00:53:49.045295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-10 00:53:49.045307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-10 00:53:49.045313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-10 00:53:49.045318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-10 00:53:49.045324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.045329 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.045335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-10 00:53:49.045358 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-10 00:53:49.045371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-10 00:53:49.045377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-10 00:53:49.045382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.045388 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.045393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-10 00:53:49.045399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-10 
00:53:49.045407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-10 00:53:49.045433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-10 00:53:49.045439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-10 00:53:49.045445 | orchestrator | skipping: [testbed-node-2] 2025-09-10 
00:53:49.045450 | orchestrator | 2025-09-10 00:53:49.045456 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-09-10 00:53:49.045461 | orchestrator | Wednesday 10 September 2025 00:52:14 +0000 (0:00:00.705) 0:04:57.757 *** 2025-09-10 00:53:49.045467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-10 00:53:49.045472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-10 00:53:49.045478 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.045483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-10 00:53:49.045489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-10 00:53:49.045495 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.045500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-10 00:53:49.045505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-10 00:53:49.045511 | orchestrator | skipping: 
[testbed-node-2] 2025-09-10 00:53:49.045516 | orchestrator | 2025-09-10 00:53:49.045522 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-09-10 00:53:49.045527 | orchestrator | Wednesday 10 September 2025 00:52:16 +0000 (0:00:01.520) 0:04:59.278 *** 2025-09-10 00:53:49.045533 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:53:49.045538 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:53:49.045543 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:53:49.045549 | orchestrator | 2025-09-10 00:53:49.045554 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-09-10 00:53:49.045559 | orchestrator | Wednesday 10 September 2025 00:52:17 +0000 (0:00:01.426) 0:05:00.704 *** 2025-09-10 00:53:49.045569 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:53:49.045575 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:53:49.045580 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:53:49.045585 | orchestrator | 2025-09-10 00:53:49.045591 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-09-10 00:53:49.045596 | orchestrator | Wednesday 10 September 2025 00:52:19 +0000 (0:00:02.129) 0:05:02.834 *** 2025-09-10 00:53:49.045601 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 00:53:49.045607 | orchestrator | 2025-09-10 00:53:49.045612 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-09-10 00:53:49.045618 | orchestrator | Wednesday 10 September 2025 00:52:21 +0000 (0:00:01.384) 0:05:04.219 *** 2025-09-10 00:53:49.045643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-10 00:53:49.045650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-10 00:53:49.045656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-10 00:53:49.045662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-10 00:53:49.045691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-10 00:53:49.045698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-10 00:53:49.045705 | orchestrator | 2025-09-10 00:53:49.045710 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-09-10 00:53:49.045716 | orchestrator | Wednesday 10 September 2025 00:52:26 +0000 (0:00:05.512) 0:05:09.731 *** 2025-09-10 00:53:49.045721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-10 00:53:49.045727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-10 00:53:49.045737 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:53:49.045746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-10 00:53:49.045767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-10 00:53:49.045774 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:53:49.045779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-10 00:53:49.045785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-10 00:53:49.045795 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:53:49.045801 | orchestrator |
2025-09-10 00:53:49.045807 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ********************
2025-09-10 00:53:49.045812 | orchestrator | Wednesday 10 September 2025 00:52:27 +0000 (0:00:00.646) 0:05:10.377 ***
2025-09-10 00:53:49.045818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-09-10 00:53:49.045823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-09-10 00:53:49.045832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-09-10 00:53:49.045837 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:53:49.045843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-09-10 00:53:49.045863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-09-10 00:53:49.045870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-09-10 00:53:49.045875 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:53:49.045881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-09-10 00:53:49.045886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-09-10 00:53:49.045892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-09-10 00:53:49.045897 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:53:49.045903 | orchestrator |
2025-09-10 00:53:49.045908 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] *********
2025-09-10 00:53:49.045913 | orchestrator | Wednesday 10 September 2025 00:52:28 +0000 (0:00:00.880) 0:05:11.258 ***
2025-09-10 00:53:49.045919 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:53:49.045924 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:53:49.045930 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:53:49.045935 | orchestrator |
2025-09-10 00:53:49.045940 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] *********
2025-09-10 00:53:49.045949 | orchestrator | Wednesday 10 September 2025 00:52:29 +0000 (0:00:01.287) 0:05:12.021 ***
2025-09-10 00:53:49.045955 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:53:49.045960 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:53:49.045965 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:53:49.045970 | orchestrator |
2025-09-10 00:53:49.045976 | orchestrator | TASK [include_role : prometheus] ***********************************************
2025-09-10 00:53:49.045981 | orchestrator | Wednesday 10 September 2025 00:52:30 +0000 (0:00:01.287) 0:05:13.308 ***
2025-09-10 00:53:49.045987 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-10 00:53:49.045992 | orchestrator |
2025-09-10 00:53:49.045997 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] *****************
2025-09-10 00:53:49.046003 | orchestrator | Wednesday 10 September 2025 00:52:31 +0000 (0:00:01.387) 0:05:14.695 ***
2025-09-10 00:53:49.046009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-10 00:53:49.046032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-10 00:53:49.046041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:53:49.046064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:53:49.046072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-10 00:53:49.046078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-10 00:53:49.046088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-10 00:53:49.046094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:53:49.046099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:53:49.046105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-10 00:53:49.046130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-10 00:53:49.046137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-10 00:53:49.046159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:53:49.046165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:53:49.046171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-10 00:53:49.046177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-10 00:53:49.046189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-09-10 00:53:49.046195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:53:49.046206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:53:49.046212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-10 00:53:49.046218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-10 00:53:49.046224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-09-10 00:53:49.046236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:53:49.046243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:53:49.046248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-10 00:53:49.046260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-10 00:53:49.046266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-09-10 00:53:49.046272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:53:49.046280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:53:49.046289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-10 00:53:49.046295 | orchestrator |
2025-09-10 00:53:49.046300 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2025-09-10 00:53:49.046310 | orchestrator | Wednesday 10 September 2025 00:52:36 +0000 (0:00:04.540) 0:05:19.236 ***
2025-09-10 00:53:49.046316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-10 00:53:49.046322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-10 00:53:49.046327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:53:49.046333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:53:49.046338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-10 00:53:49.046350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-10 00:53:49.046360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-10 00:53:49.046366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-09-10 00:53:49.046372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-10 00:53:49.046378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:53:49.046384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:53:49.046393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:53:49.046405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 00:53:49.046415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-10 00:53:49.046420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro',
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-10 00:53:49.046426 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.046432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-10 00:53:49.046438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 
'backend_http_extra': ['timeout server 45s']}}}})  2025-09-10 00:53:49.046443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-10 00:53:49.046454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-10 00:53:49.046465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-10 00:53:49.046470 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.046476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-10 00:53:49.046482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-10 00:53:49.046487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-10 00:53:49.046493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-10 00:53:49.046498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-10 00:53:49.046509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-10 00:53:49.046532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 
'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-10 00:53:49.046538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-10 00:53:49.046543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-10 00:53:49.046549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 
'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-10 00:53:49.046555 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.046561 | orchestrator | 2025-09-10 00:53:49.046566 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-09-10 00:53:49.046572 | orchestrator | Wednesday 10 September 2025 00:52:37 +0000 (0:00:01.223) 0:05:20.459 *** 2025-09-10 00:53:49.046578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-10 00:53:49.046583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-10 00:53:49.046593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-10 00:53:49.046599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-10 00:53:49.046607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 
'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-10 00:53:49.046616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-10 00:53:49.046622 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.046628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-10 00:53:49.046633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-10 00:53:49.046639 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.046644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-10 00:53:49.046650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-10 00:53:49.046656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-10 
00:53:49.046661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-10 00:53:49.046667 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.046672 | orchestrator | 2025-09-10 00:53:49.046678 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-09-10 00:53:49.046683 | orchestrator | Wednesday 10 September 2025 00:52:38 +0000 (0:00:01.038) 0:05:21.497 *** 2025-09-10 00:53:49.046689 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.046694 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.046699 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.046705 | orchestrator | 2025-09-10 00:53:49.046710 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-09-10 00:53:49.046716 | orchestrator | Wednesday 10 September 2025 00:52:39 +0000 (0:00:00.437) 0:05:21.935 *** 2025-09-10 00:53:49.046721 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.046726 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.046732 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.046737 | orchestrator | 2025-09-10 00:53:49.046746 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-09-10 00:53:49.046751 | orchestrator | Wednesday 10 September 2025 00:52:40 +0000 (0:00:01.411) 0:05:23.346 *** 2025-09-10 00:53:49.046757 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 00:53:49.046762 | orchestrator | 2025-09-10 00:53:49.046768 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 
2025-09-10 00:53:49.046773 | orchestrator | Wednesday 10 September 2025 00:52:42 +0000 (0:00:01.820) 0:05:25.167 *** 2025-09-10 00:53:49.046781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-10 00:53:49.046791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 
'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-10 00:53:49.046797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-10 00:53:49.046803 | orchestrator | 2025-09-10 00:53:49.046809 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-09-10 00:53:49.046814 | orchestrator | Wednesday 10 September 2025 00:52:44 +0000 (0:00:02.420) 0:05:27.588 *** 2025-09-10 00:53:49.046820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-10 00:53:49.046834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-10 00:53:49.046840 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.046846 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.046855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-10 00:53:49.046861 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.046866 | orchestrator | 2025-09-10 00:53:49.046871 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-09-10 00:53:49.046877 | orchestrator | Wednesday 10 September 2025 00:52:45 +0000 (0:00:00.379) 0:05:27.968 *** 2025-09-10 00:53:49.046882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-09-10 00:53:49.046888 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.046893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-09-10 00:53:49.046898 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.046904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-09-10 00:53:49.046909 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.046914 | orchestrator | 2025-09-10 00:53:49.046920 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-09-10 
00:53:49.046929 | orchestrator | Wednesday 10 September 2025 00:52:46 +0000 (0:00:01.036) 0:05:29.004 *** 2025-09-10 00:53:49.046935 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.046940 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.046945 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.046951 | orchestrator | 2025-09-10 00:53:49.046956 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-09-10 00:53:49.046962 | orchestrator | Wednesday 10 September 2025 00:52:46 +0000 (0:00:00.431) 0:05:29.436 *** 2025-09-10 00:53:49.046967 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.046972 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.046978 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.046983 | orchestrator | 2025-09-10 00:53:49.046988 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-09-10 00:53:49.046994 | orchestrator | Wednesday 10 September 2025 00:52:47 +0000 (0:00:01.302) 0:05:30.738 *** 2025-09-10 00:53:49.046999 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 00:53:49.047004 | orchestrator | 2025-09-10 00:53:49.047010 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-09-10 00:53:49.047015 | orchestrator | Wednesday 10 September 2025 00:52:49 +0000 (0:00:01.783) 0:05:32.522 *** 2025-09-10 00:53:49.047021 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-10 00:53:49.047032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-10 00:53:49.047038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-10 00:53:49.047049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-10 00:53:49.047055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-10 00:53:49.047064 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-10 00:53:49.047069 | orchestrator | 2025-09-10 00:53:49.047077 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-09-10 00:53:49.047083 | orchestrator | Wednesday 10 September 2025 00:52:56 +0000 (0:00:06.667) 0:05:39.189 *** 2025-09-10 00:53:49.047089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-10 00:53:49.047099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-10 00:53:49.047105 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.047111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-10 00:53:49.047119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-10 00:53:49.047125 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.047133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-10 00:53:49.047153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-10 00:53:49.047159 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.047165 | orchestrator | 2025-09-10 00:53:49.047170 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-09-10 00:53:49.047176 | orchestrator | Wednesday 10 September 2025 00:52:56 +0000 (0:00:00.714) 0:05:39.904 *** 2025-09-10 00:53:49.047182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-10 00:53:49.047187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-10 00:53:49.047193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-10 00:53:49.047198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-10 00:53:49.047204 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.047209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-10 00:53:49.047215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-10 00:53:49.047221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-10 00:53:49.047226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 
'no'}})  2025-09-10 00:53:49.047231 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.047239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-10 00:53:49.047248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-10 00:53:49.047254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-10 00:53:49.047265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-10 00:53:49.047270 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.047276 | orchestrator | 2025-09-10 00:53:49.047281 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-09-10 00:53:49.047287 | orchestrator | Wednesday 10 September 2025 00:52:58 +0000 (0:00:01.688) 0:05:41.592 *** 2025-09-10 00:53:49.047292 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:53:49.047297 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:53:49.047303 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:53:49.047308 | orchestrator | 2025-09-10 00:53:49.047314 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-09-10 00:53:49.047319 | orchestrator | Wednesday 10 September 2025 00:53:00 +0000 (0:00:01.406) 0:05:42.999 *** 2025-09-10 00:53:49.047325 | 
orchestrator | changed: [testbed-node-0] 2025-09-10 00:53:49.047330 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:53:49.047335 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:53:49.047341 | orchestrator | 2025-09-10 00:53:49.047346 | orchestrator | TASK [include_role : swift] **************************************************** 2025-09-10 00:53:49.047352 | orchestrator | Wednesday 10 September 2025 00:53:02 +0000 (0:00:02.235) 0:05:45.234 *** 2025-09-10 00:53:49.047357 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.047363 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.047368 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.047373 | orchestrator | 2025-09-10 00:53:49.047379 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-09-10 00:53:49.047384 | orchestrator | Wednesday 10 September 2025 00:53:02 +0000 (0:00:00.360) 0:05:45.595 *** 2025-09-10 00:53:49.047390 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.047395 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.047400 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.047406 | orchestrator | 2025-09-10 00:53:49.047411 | orchestrator | TASK [include_role : trove] **************************************************** 2025-09-10 00:53:49.047417 | orchestrator | Wednesday 10 September 2025 00:53:02 +0000 (0:00:00.316) 0:05:45.911 *** 2025-09-10 00:53:49.047422 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.047428 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.047433 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.047438 | orchestrator | 2025-09-10 00:53:49.047444 | orchestrator | TASK [include_role : venus] **************************************************** 2025-09-10 00:53:49.047449 | orchestrator | Wednesday 10 September 2025 00:53:03 +0000 (0:00:00.675) 0:05:46.586 *** 2025-09-10 00:53:49.047455 | 
orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.047460 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.047465 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.047471 | orchestrator | 2025-09-10 00:53:49.047476 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-09-10 00:53:49.047482 | orchestrator | Wednesday 10 September 2025 00:53:03 +0000 (0:00:00.314) 0:05:46.901 *** 2025-09-10 00:53:49.047487 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.047492 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.047498 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.047503 | orchestrator | 2025-09-10 00:53:49.047509 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-09-10 00:53:49.047514 | orchestrator | Wednesday 10 September 2025 00:53:04 +0000 (0:00:00.333) 0:05:47.234 *** 2025-09-10 00:53:49.047519 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.047525 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.047530 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.047535 | orchestrator | 2025-09-10 00:53:49.047545 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-09-10 00:53:49.047550 | orchestrator | Wednesday 10 September 2025 00:53:05 +0000 (0:00:00.869) 0:05:48.103 *** 2025-09-10 00:53:49.047556 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:53:49.047561 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:53:49.047566 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:53:49.047572 | orchestrator | 2025-09-10 00:53:49.047577 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-09-10 00:53:49.047583 | orchestrator | Wednesday 10 September 2025 00:53:05 +0000 (0:00:00.682) 0:05:48.786 *** 2025-09-10 00:53:49.047588 | orchestrator | ok: 
[testbed-node-0] 2025-09-10 00:53:49.047594 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:53:49.047599 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:53:49.047604 | orchestrator | 2025-09-10 00:53:49.047610 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-09-10 00:53:49.047615 | orchestrator | Wednesday 10 September 2025 00:53:06 +0000 (0:00:00.380) 0:05:49.166 *** 2025-09-10 00:53:49.047621 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:53:49.047626 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:53:49.047631 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:53:49.047637 | orchestrator | 2025-09-10 00:53:49.047642 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-09-10 00:53:49.047647 | orchestrator | Wednesday 10 September 2025 00:53:07 +0000 (0:00:00.983) 0:05:50.150 *** 2025-09-10 00:53:49.047653 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:53:49.047661 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:53:49.047666 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:53:49.047672 | orchestrator | 2025-09-10 00:53:49.047677 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-09-10 00:53:49.047682 | orchestrator | Wednesday 10 September 2025 00:53:08 +0000 (0:00:01.257) 0:05:51.408 *** 2025-09-10 00:53:49.047688 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:53:49.047693 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:53:49.047701 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:53:49.047706 | orchestrator | 2025-09-10 00:53:49.047712 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-09-10 00:53:49.047717 | orchestrator | Wednesday 10 September 2025 00:53:09 +0000 (0:00:00.865) 0:05:52.273 *** 2025-09-10 00:53:49.047722 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:53:49.047728 | orchestrator | changed: 
[testbed-node-2] 2025-09-10 00:53:49.047733 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:53:49.047738 | orchestrator | 2025-09-10 00:53:49.047744 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-09-10 00:53:49.047749 | orchestrator | Wednesday 10 September 2025 00:53:18 +0000 (0:00:09.519) 0:06:01.793 *** 2025-09-10 00:53:49.047755 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:53:49.047760 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:53:49.047766 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:53:49.047771 | orchestrator | 2025-09-10 00:53:49.047777 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-09-10 00:53:49.047782 | orchestrator | Wednesday 10 September 2025 00:53:19 +0000 (0:00:00.741) 0:06:02.535 *** 2025-09-10 00:53:49.047787 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:53:49.047793 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:53:49.047798 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:53:49.047804 | orchestrator | 2025-09-10 00:53:49.047809 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-09-10 00:53:49.047814 | orchestrator | Wednesday 10 September 2025 00:53:33 +0000 (0:00:13.414) 0:06:15.949 *** 2025-09-10 00:53:49.047820 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:53:49.047825 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:53:49.047830 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:53:49.047836 | orchestrator | 2025-09-10 00:53:49.047841 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-09-10 00:53:49.047847 | orchestrator | Wednesday 10 September 2025 00:53:34 +0000 (0:00:01.149) 0:06:17.098 *** 2025-09-10 00:53:49.047855 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:53:49.047861 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:53:49.047866 | 
orchestrator | changed: [testbed-node-2] 2025-09-10 00:53:49.047872 | orchestrator | 2025-09-10 00:53:49.047877 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-09-10 00:53:49.047882 | orchestrator | Wednesday 10 September 2025 00:53:38 +0000 (0:00:04.606) 0:06:21.705 *** 2025-09-10 00:53:49.047888 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.047893 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.047899 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.047904 | orchestrator | 2025-09-10 00:53:49.047909 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-09-10 00:53:49.047915 | orchestrator | Wednesday 10 September 2025 00:53:39 +0000 (0:00:00.364) 0:06:22.069 *** 2025-09-10 00:53:49.047920 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.047925 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.047931 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.047936 | orchestrator | 2025-09-10 00:53:49.047941 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-09-10 00:53:49.047947 | orchestrator | Wednesday 10 September 2025 00:53:39 +0000 (0:00:00.367) 0:06:22.437 *** 2025-09-10 00:53:49.047952 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.047958 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.047963 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.047968 | orchestrator | 2025-09-10 00:53:49.047974 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-09-10 00:53:49.047979 | orchestrator | Wednesday 10 September 2025 00:53:40 +0000 (0:00:00.852) 0:06:23.289 *** 2025-09-10 00:53:49.047985 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.047990 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.047995 | 
orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.048001 | orchestrator | 2025-09-10 00:53:49.048006 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-09-10 00:53:49.048012 | orchestrator | Wednesday 10 September 2025 00:53:40 +0000 (0:00:00.354) 0:06:23.644 *** 2025-09-10 00:53:49.048017 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.048022 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.048028 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.048033 | orchestrator | 2025-09-10 00:53:49.048038 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-09-10 00:53:49.048044 | orchestrator | Wednesday 10 September 2025 00:53:41 +0000 (0:00:00.369) 0:06:24.014 *** 2025-09-10 00:53:49.048049 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:53:49.048055 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:53:49.048060 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:53:49.048066 | orchestrator | 2025-09-10 00:53:49.048071 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-09-10 00:53:49.048076 | orchestrator | Wednesday 10 September 2025 00:53:41 +0000 (0:00:00.339) 0:06:24.353 *** 2025-09-10 00:53:49.048082 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:53:49.048087 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:53:49.048093 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:53:49.048098 | orchestrator | 2025-09-10 00:53:49.048103 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-09-10 00:53:49.048109 | orchestrator | Wednesday 10 September 2025 00:53:46 +0000 (0:00:05.366) 0:06:29.720 *** 2025-09-10 00:53:49.048114 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:53:49.048120 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:53:49.048125 | orchestrator | ok: 
[testbed-node-2] 2025-09-10 00:53:49.048130 | orchestrator | 2025-09-10 00:53:49.048136 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-10 00:53:49.048174 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-09-10 00:53:49.048189 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-09-10 00:53:49.048195 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-09-10 00:53:49.048201 | orchestrator | 2025-09-10 00:53:49.048206 | orchestrator | 2025-09-10 00:53:49.048215 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-10 00:53:49.048220 | orchestrator | Wednesday 10 September 2025 00:53:47 +0000 (0:00:00.852) 0:06:30.572 *** 2025-09-10 00:53:49.048226 | orchestrator | =============================================================================== 2025-09-10 00:53:49.048231 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.41s 2025-09-10 00:53:49.048236 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.52s 2025-09-10 00:53:49.048242 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.67s 2025-09-10 00:53:49.048247 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 6.13s 2025-09-10 00:53:49.048253 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 6.00s 2025-09-10 00:53:49.048258 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.51s 2025-09-10 00:53:49.048263 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 5.37s 2025-09-10 00:53:49.048269 | orchestrator | loadbalancer : Copying over proxysql 
config ----------------------------- 5.24s 2025-09-10 00:53:49.048274 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 4.92s 2025-09-10 00:53:49.048279 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.88s 2025-09-10 00:53:49.048285 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.61s 2025-09-10 00:53:49.048290 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.54s 2025-09-10 00:53:49.048296 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 4.51s 2025-09-10 00:53:49.048301 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 4.47s 2025-09-10 00:53:49.048307 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.36s 2025-09-10 00:53:49.048312 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 4.26s 2025-09-10 00:53:49.048317 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.23s 2025-09-10 00:53:49.048323 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.16s 2025-09-10 00:53:49.048328 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.14s 2025-09-10 00:53:49.048334 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 4.09s 2025-09-10 00:53:49.048339 | orchestrator | 2025-09-10 00:53:49 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED 2025-09-10 00:53:49.048345 | orchestrator | 2025-09-10 00:53:49 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:53:52.088322 | orchestrator | 2025-09-10 00:53:52 | INFO  | Task bee789ca-c824-40fe-8a58-e8ac5e216152 is in state STARTED 2025-09-10 00:53:52.090134 | orchestrator | 2025-09-10 00:53:52 | INFO  | Task 
6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED 2025-09-10 00:53:52.093044 | orchestrator | 2025-09-10 00:53:52 | INFO  | Task 117b03f8-6775-47b8-989a-3c5f4008eab9 is in state STARTED 2025-09-10 00:53:52.093428 | orchestrator | 2025-09-10 00:53:52 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:53:55.127115 | orchestrator | 2025-09-10 00:53:55 | INFO  | Task bee789ca-c824-40fe-8a58-e8ac5e216152 is in state STARTED 2025-09-10 00:53:55.127851 | orchestrator | 2025-09-10 00:53:55 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED 2025-09-10 00:53:55.128725 | orchestrator | 2025-09-10 00:53:55 | INFO  | Task 117b03f8-6775-47b8-989a-3c5f4008eab9 is in state STARTED 2025-09-10 00:53:55.129694 | orchestrator | 2025-09-10 00:53:55 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:53:58.158265 | orchestrator | 2025-09-10 00:53:58 | INFO  | Task bee789ca-c824-40fe-8a58-e8ac5e216152 is in state STARTED 2025-09-10 00:53:58.158756 | orchestrator | 2025-09-10 00:53:58 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED 2025-09-10 00:53:58.159477 | orchestrator | 2025-09-10 00:53:58 | INFO  | Task 117b03f8-6775-47b8-989a-3c5f4008eab9 is in state STARTED 2025-09-10 00:53:58.159728 | orchestrator | 2025-09-10 00:53:58 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:54:01.207645 | orchestrator | 2025-09-10 00:54:01 | INFO  | Task bee789ca-c824-40fe-8a58-e8ac5e216152 is in state STARTED 2025-09-10 00:54:01.211350 | orchestrator | 2025-09-10 00:54:01 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED 2025-09-10 00:54:01.212094 | orchestrator | 2025-09-10 00:54:01 | INFO  | Task 117b03f8-6775-47b8-989a-3c5f4008eab9 is in state STARTED 2025-09-10 00:54:01.216022 | orchestrator | 2025-09-10 00:54:01 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:54:04.247207 | orchestrator | 2025-09-10 00:54:04 | INFO  | Task bee789ca-c824-40fe-8a58-e8ac5e216152 is in state 
STARTED
2025-09-10 00:54:04.249575 | orchestrator | 2025-09-10 00:54:04 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED
2025-09-10 00:54:04.249665 | orchestrator | 2025-09-10 00:54:04 | INFO  | Task 117b03f8-6775-47b8-989a-3c5f4008eab9 is in state STARTED
2025-09-10 00:54:04.249680 | orchestrator | 2025-09-10 00:54:04 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:56:00.168047 | orchestrator | 2025-09-10 00:56:00 | INFO  | Task bee789ca-c824-40fe-8a58-e8ac5e216152 is in state STARTED
2025-09-10 00:56:00.170480 | orchestrator | 2025-09-10 00:56:00 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED
2025-09-10 00:56:00.172771 | orchestrator |
2025-09-10 00:56:00 | INFO  | Task 117b03f8-6775-47b8-989a-3c5f4008eab9 is in state STARTED
2025-09-10 00:56:00.172910 | orchestrator | 2025-09-10 00:56:00 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:56:03.219993 | orchestrator | 2025-09-10 00:56:03 | INFO  | Task bee789ca-c824-40fe-8a58-e8ac5e216152 is in state STARTED
2025-09-10 00:56:03.221592 | orchestrator | 2025-09-10 00:56:03 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED
2025-09-10 00:56:03.225203 | orchestrator | 2025-09-10 00:56:03 | INFO  | Task 117b03f8-6775-47b8-989a-3c5f4008eab9 is in state STARTED
2025-09-10 00:56:03.225232 | orchestrator | 2025-09-10 00:56:03 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:56:06.273470 | orchestrator | 2025-09-10 00:56:06 | INFO  | Task bee789ca-c824-40fe-8a58-e8ac5e216152 is in state STARTED
2025-09-10 00:56:06.274825 | orchestrator | 2025-09-10 00:56:06 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state STARTED
2025-09-10 00:56:06.277247 | orchestrator | 2025-09-10 00:56:06 | INFO  | Task 117b03f8-6775-47b8-989a-3c5f4008eab9 is in state STARTED
2025-09-10 00:56:06.277611 | orchestrator | 2025-09-10 00:56:06 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:56:09.333066 | orchestrator | 2025-09-10 00:56:09 | INFO  | Task bee789ca-c824-40fe-8a58-e8ac5e216152 is in state STARTED
2025-09-10 00:56:09.339654 | orchestrator | 2025-09-10 00:56:09 | INFO  | Task 6325b0b6-50f4-49f6-8ed2-7fc0af0a8ff4 is in state SUCCESS
2025-09-10 00:56:09.342317 | orchestrator |
2025-09-10 00:56:09.342350 | orchestrator |
2025-09-10 00:56:09.342362 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2025-09-10 00:56:09.342374 | orchestrator |
2025-09-10 00:56:09.342436 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-09-10 00:56:09.342559 | orchestrator | Wednesday 10 September 2025 00:44:46 +0000 (0:00:00.673) 0:00:00.673 ***
2025-09-10 00:56:09.342584 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-10 00:56:09.342597 | orchestrator |
2025-09-10 00:56:09.342608 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-09-10 00:56:09.342619 | orchestrator | Wednesday 10 September 2025 00:44:47 +0000 (0:00:01.027) 0:00:01.701 ***
2025-09-10 00:56:09.342630 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.342642 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.342653 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.342664 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:56:09.342675 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:56:09.342686 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:56:09.342696 | orchestrator |
2025-09-10 00:56:09.342707 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-09-10 00:56:09.342718 | orchestrator | Wednesday 10 September 2025 00:44:48 +0000 (0:00:01.624) 0:00:03.325 ***
2025-09-10 00:56:09.342729 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.342740 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.342751 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.342762 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:56:09.342773 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:56:09.342783 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:56:09.342794 | orchestrator |
2025-09-10 00:56:09.342804 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-09-10 00:56:09.342815 | orchestrator | Wednesday 10 September 2025 00:44:49 +0000 (0:00:00.761) 0:00:04.087 ***
2025-09-10 00:56:09.342937 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.342952 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.342999 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.343013 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:56:09.343026 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:56:09.343045 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:56:09.343059 | orchestrator |
2025-09-10 00:56:09.343072 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-09-10 00:56:09.343084 | orchestrator | Wednesday 10 September 2025 00:44:50 +0000 (0:00:00.885) 0:00:04.973 ***
2025-09-10 00:56:09.343097 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.343109 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.343122 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.343134 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:56:09.343146 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:56:09.343159 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:56:09.343171 | orchestrator |
2025-09-10 00:56:09.343263 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-09-10 00:56:09.343302 | orchestrator | Wednesday 10 September 2025 00:44:51 +0000 (0:00:00.735) 0:00:05.709 ***
2025-09-10 00:56:09.343314 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.343324 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.343355 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.343366 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:56:09.343377 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:56:09.343411 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:56:09.343422 | orchestrator |
2025-09-10 00:56:09.343434 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-09-10 00:56:09.343445 | orchestrator | Wednesday 10 September 2025 00:44:51 +0000 (0:00:00.634) 0:00:06.343 ***
2025-09-10 00:56:09.343456 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.343466 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.343477 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.343487 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:56:09.343498 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:56:09.343509 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:56:09.343519 | orchestrator |
2025-09-10 00:56:09.343530 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-09-10 00:56:09.343541 | orchestrator | Wednesday 10 September 2025 00:44:52 +0000 (0:00:00.929) 0:00:07.273 ***
2025-09-10 00:56:09.343552 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.343563 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.343574 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.343585 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.343596 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.343606 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.343617 | orchestrator |
2025-09-10 00:56:09.343627 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-09-10 00:56:09.343638 | orchestrator | Wednesday 10 September 2025 00:44:53 +0000 (0:00:00.737) 0:00:08.010 ***
2025-09-10 00:56:09.343649 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.343660 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.343671 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.343681 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:56:09.343692 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:56:09.343702 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:56:09.343713 | orchestrator |
2025-09-10 00:56:09.343724 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-09-10 00:56:09.343734 | orchestrator | Wednesday 10 September 2025 00:44:54 +0000 (0:00:00.828) 0:00:08.838 ***
2025-09-10 00:56:09.343745 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-10 00:56:09.343756 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-10 00:56:09.343767 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-10 00:56:09.343777 | orchestrator |
2025-09-10 00:56:09.343788 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-09-10 00:56:09.343799 | orchestrator | Wednesday 10 September 2025 00:44:54 +0000 (0:00:00.532) 0:00:09.371 ***
2025-09-10 00:56:09.343810 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.343821 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.343831 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.343842 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:56:09.343852 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:56:09.343863 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:56:09.343873 | orchestrator |
2025-09-10 00:56:09.343900 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-09-10 00:56:09.343911 | orchestrator | Wednesday 10 September 2025 00:44:55 +0000 (0:00:01.066) 0:00:10.438 ***
2025-09-10 00:56:09.343958 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-10 00:56:09.343970 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-10 00:56:09.343980 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-10 00:56:09.343991 | orchestrator |
2025-09-10 00:56:09.344002 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-09-10 00:56:09.344021 | orchestrator | Wednesday 10 September 2025 00:44:58 +0000 (0:00:02.730) 0:00:13.168 ***
2025-09-10 00:56:09.344032 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-09-10 00:56:09.344043 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-09-10 00:56:09.344054 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-09-10 00:56:09.344065 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.344075 | orchestrator |
2025-09-10 00:56:09.344086 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-09-10 00:56:09.344097 | orchestrator | Wednesday 10 September 2025 00:44:59 +0000 (0:00:00.759) 0:00:13.928 ***
2025-09-10 00:56:09.344183 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-09-10 00:56:09.344217 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-09-10 00:56:09.344229 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-09-10 00:56:09.344240 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.344251 | orchestrator |
2025-09-10 00:56:09.344261 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-09-10 00:56:09.344272 | orchestrator | Wednesday 10 September 2025 00:45:00 +0000 (0:00:01.183) 0:00:15.111 ***
2025-09-10 00:56:09.344286 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-09-10 00:56:09.344299 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-09-10 00:56:09.344311 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-09-10 00:56:09.344322 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.344333 | orchestrator |
2025-09-10 00:56:09.344344 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-09-10 00:56:09.344355 | orchestrator | Wednesday 10 September 2025 00:45:00 +0000 (0:00:00.290) 0:00:15.402 ***
2025-09-10 00:56:09.344376 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-09-10 00:44:56.336961', 'end': '2025-09-10 00:44:56.593805', 'delta': '0:00:00.256844', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-09-10 00:56:09.344459 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-10 00:44:57.074122', 'end': '2025-09-10 00:44:57.341670', 'delta': '0:00:00.267548', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-09-10 00:56:09.344478 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-09-10 00:44:58.068566', 'end': '2025-09-10 00:44:58.362366', 'delta': '0:00:00.293800', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-09-10 00:56:09.344490 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.344501 | orchestrator |
2025-09-10 00:56:09.344512 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-09-10 00:56:09.344523 | orchestrator | Wednesday 10 September 2025 00:45:01 +0000 (0:00:00.285) 0:00:15.688 ***
2025-09-10 00:56:09.344534 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.344544 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.344555 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.344566 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:56:09.344577 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:56:09.344587 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:56:09.344598 | orchestrator |
2025-09-10 00:56:09.344609 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-09-10 00:56:09.344620 | orchestrator | Wednesday 10 September 2025 00:45:02 +0000 (0:00:01.083) 0:00:16.771 ***
2025-09-10 00:56:09.344631 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-10 00:56:09.344642 | orchestrator |
2025-09-10 00:56:09.344653 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-09-10 00:56:09.344663 | orchestrator | Wednesday 10 September 2025 00:45:02 +0000 (0:00:00.626) 0:00:17.398 ***
2025-09-10 00:56:09.344674 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.344685 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.344696 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.344707 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.344718 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.344728 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.344739 | orchestrator |
2025-09-10 00:56:09.344750 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-09-10 00:56:09.344761 | orchestrator | Wednesday 10 September 2025 00:45:04 +0000 (0:00:01.777) 0:00:19.175 ***
2025-09-10 00:56:09.344960 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.344970 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.344979 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.344989 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.344998 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.345008 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.345017 | orchestrator |
2025-09-10 00:56:09.345027 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-09-10 00:56:09.345044 | orchestrator | Wednesday 10 September 2025 00:45:06 +0000 (0:00:01.626) 0:00:20.801 ***
2025-09-10 00:56:09.345054 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.345063 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.345073 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.345082 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.345092 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.345101 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.345110 | orchestrator |
2025-09-10 00:56:09.345120 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-09-10 00:56:09.345130 | orchestrator | Wednesday 10 September 2025 00:45:07 +0000 (0:00:01.482) 0:00:22.284 ***
2025-09-10 00:56:09.345139 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.345149 | orchestrator |
2025-09-10 00:56:09.345233 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-09-10 00:56:09.345244 | orchestrator | Wednesday 10 September 2025 00:45:07 +0000 (0:00:00.179) 0:00:22.464 ***
2025-09-10 00:56:09.345254 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.345263 | orchestrator |
2025-09-10 00:56:09.345273 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-09-10 00:56:09.345283 | orchestrator | Wednesday 10 September 2025 00:45:09 +0000 (0:00:01.262) 0:00:23.727 ***
2025-09-10 00:56:09.345292 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.345302 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.345311 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.345321 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.345331 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.345341 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.345374 | orchestrator |
2025-09-10 00:56:09.345406 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-09-10 00:56:09.345417 | orchestrator | Wednesday 10 September 2025 00:45:10 +0000 (0:00:01.351) 0:00:25.078 ***
2025-09-10 00:56:09.345448 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.345458 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.345468 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.345478 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.345487 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.345497 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.345506 | orchestrator |
2025-09-10 00:56:09.345516 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-09-10 00:56:09.345525 | orchestrator | Wednesday 10 September 2025 00:45:11 +0000 (0:00:00.992) 0:00:26.070 ***
2025-09-10 00:56:09.345535 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.345544 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.345553 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.345563 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.345572 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.345582 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.345591 |
orchestrator | 2025-09-10 00:56:09.345601 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-09-10 00:56:09.345610 | orchestrator | Wednesday 10 September 2025 00:45:12 +0000 (0:00:00.838) 0:00:26.908 *** 2025-09-10 00:56:09.345619 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.345629 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:56:09.345638 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:56:09.345648 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:56:09.345657 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:56:09.345666 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:56:09.345676 | orchestrator | 2025-09-10 00:56:09.345685 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-09-10 00:56:09.345695 | orchestrator | Wednesday 10 September 2025 00:45:13 +0000 (0:00:01.244) 0:00:28.152 *** 2025-09-10 00:56:09.345712 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.345722 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:56:09.345731 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:56:09.345746 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:56:09.345756 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:56:09.345765 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:56:09.345775 | orchestrator | 2025-09-10 00:56:09.345784 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-09-10 00:56:09.345794 | orchestrator | Wednesday 10 September 2025 00:45:14 +0000 (0:00:01.198) 0:00:29.351 *** 2025-09-10 00:56:09.345803 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.345813 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:56:09.345927 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:56:09.345937 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:56:09.345947 | 
orchestrator | skipping: [testbed-node-1] 2025-09-10 00:56:09.345957 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:56:09.345966 | orchestrator | 2025-09-10 00:56:09.345976 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-09-10 00:56:09.345985 | orchestrator | Wednesday 10 September 2025 00:45:15 +0000 (0:00:00.826) 0:00:30.177 *** 2025-09-10 00:56:09.345995 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.346005 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:56:09.346014 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:56:09.346084 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:56:09.346094 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:56:09.346103 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:56:09.346113 | orchestrator | 2025-09-10 00:56:09.346122 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-09-10 00:56:09.346132 | orchestrator | Wednesday 10 September 2025 00:45:16 +0000 (0:00:00.550) 0:00:30.728 *** 2025-09-10 00:56:09.346225 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4b73e898--cb4c--523f--8aca--971ee560c7ea-osd--block--4b73e898--cb4c--523f--8aca--971ee560c7ea', 'dm-uuid-LVM-uE5Yjf2CsxkFgHgIpbKsPiyHm2TurikN3S280Dz2nod0tzAwO5S1pyjk2inle8Pf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-10 00:56:09.346237 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--2bea83b6--6800--529c--bdd8--a613f3421a6f-osd--block--2bea83b6--6800--529c--bdd8--a613f3421a6f', 'dm-uuid-LVM-ZI1l2hrd5ozIdIPbGSORiFKfU4pLhNqBcQz7LQPLKX2159t3r3EwXsxnR1q2MZN6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-10 00:56:09.346256 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-10 00:56:09.346267 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-10 00:56:09.346299 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-10 00:56:09.346309 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '',
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-10 00:56:09.346324 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-10 00:56:09.346334 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-10 00:56:09.346344 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-10 00:56:09.346354 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle':
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-10 00:56:09.346364 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--20419d67--2a88--5ee6--832e--dd0a34a7687a-osd--block--20419d67--2a88--5ee6--832e--dd0a34a7687a', 'dm-uuid-LVM-2hB5Q2a5udGrsgyYcPzLEBRo1qDiEpux1eUKdeDY1QiWJ6egp5CTMxBgXWbEK8V7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-10 00:56:09.346409 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_885f5351-8a1d-42ab-b6e2-24d16c7f1b28', 'scsi-SQEMU_QEMU_HARDDISK_885f5351-8a1d-42ab-b6e2-24d16c7f1b28'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_885f5351-8a1d-42ab-b6e2-24d16c7f1b28-part1', 'scsi-SQEMU_QEMU_HARDDISK_885f5351-8a1d-42ab-b6e2-24d16c7f1b28-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_885f5351-8a1d-42ab-b6e2-24d16c7f1b28-part14', 'scsi-SQEMU_QEMU_HARDDISK_885f5351-8a1d-42ab-b6e2-24d16c7f1b28-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids':
['scsi-0QEMU_QEMU_HARDDISK_885f5351-8a1d-42ab-b6e2-24d16c7f1b28-part15', 'scsi-SQEMU_QEMU_HARDDISK_885f5351-8a1d-42ab-b6e2-24d16c7f1b28-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_885f5351-8a1d-42ab-b6e2-24d16c7f1b28-part16', 'scsi-SQEMU_QEMU_HARDDISK_885f5351-8a1d-42ab-b6e2-24d16c7f1b28-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-10 00:56:09.346431 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--4b73e898--cb4c--523f--8aca--971ee560c7ea-osd--block--4b73e898--cb4c--523f--8aca--971ee560c7ea'], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YtsAg8-uQKK-fp6U-2Eoq-WWNO-UgLL-3GllCz', 'scsi-0QEMU_QEMU_HARDDISK_15be4489-a4ef-46b7-8669-fa4e45790ef6', 'scsi-SQEMU_QEMU_HARDDISK_15be4489-a4ef-46b7-8669-fa4e45790ef6'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-10 00:56:09.346442 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--28e77ae9--929e--5c68--8a2a--91f3bea00aca-osd--block--28e77ae9--929e--5c68--8a2a--91f3bea00aca', 'dm-uuid-LVM-k27w3X3DUUX1XZAerGiKa0AfnUAShbWdavK2lWdW2BR1Va40lsJhz6WVU6V9WMmo'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-10 00:56:09.346452 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--2bea83b6--6800--529c--bdd8--a613f3421a6f-osd--block--2bea83b6--6800--529c--bdd8--a613f3421a6f'], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1at0Au-Fg2h-xPI3-6SzS-AeD9-5EM6-mEzMll', 'scsi-0QEMU_QEMU_HARDDISK_2400494c-3cf2-4780-9c6b-527d492c6bfd', 'scsi-SQEMU_QEMU_HARDDISK_2400494c-3cf2-4780-9c6b-527d492c6bfd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-10 00:56:09.346471 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b761a3bc-d220-47d2-9376-37cff0079757', 'scsi-SQEMU_QEMU_HARDDISK_b761a3bc-d220-47d2-9376-37cff0079757'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-10 00:56:09.346494 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-10-00-02-28-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-10 00:56:09.346505 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []},
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-10 00:56:09.346519 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-10 00:56:09.346529 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.346539 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-10 00:56:09.346549 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-10 00:56:09.346559 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--36dac960--67a7--54a4--bbd2--b6f8976b18f7-osd--block--36dac960--67a7--54a4--bbd2--b6f8976b18f7',
'dm-uuid-LVM-UcEdMXyLpheVryiFjGGikHOHzacaQqtC6drg4fiUEBxjwrRysilyddDiBDve0Xzr'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-10 00:56:09.346570 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f4115e81--926e--57fb--8145--65084efa4466-osd--block--f4115e81--926e--57fb--8145--65084efa4466', 'dm-uuid-LVM-yNmKPiSdCRM90Ij0ZxbCNNJY3U3c3fnFtFxM8rdvExiBtaoR2TkcVtvQsk8io0dz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-10 00:56:09.346586 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-10 00:56:09.346603 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-10 00:56:09.346613 | orchestrator | skipping:
[testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-10 00:56:09.346627 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-10 00:56:09.346637 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-10 00:56:09.346647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-10 00:56:09.346657 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions':
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-10 00:56:09.346667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-10 00:56:09.346677 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-10 00:56:09.346686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-10 00:56:09.347798 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes',
'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-10 00:56:09.347886 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-10 00:56:09.347902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-10 00:56:09.347931 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-10 00:56:09.347943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-10 00:56:09.347976 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders':
[], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d1bd04c-2fb3-49ff-963e-06aac3d99067', 'scsi-SQEMU_QEMU_HARDDISK_1d1bd04c-2fb3-49ff-963e-06aac3d99067'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d1bd04c-2fb3-49ff-963e-06aac3d99067-part1', 'scsi-SQEMU_QEMU_HARDDISK_1d1bd04c-2fb3-49ff-963e-06aac3d99067-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d1bd04c-2fb3-49ff-963e-06aac3d99067-part14', 'scsi-SQEMU_QEMU_HARDDISK_1d1bd04c-2fb3-49ff-963e-06aac3d99067-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d1bd04c-2fb3-49ff-963e-06aac3d99067-part15', 'scsi-SQEMU_QEMU_HARDDISK_1d1bd04c-2fb3-49ff-963e-06aac3d99067-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d1bd04c-2fb3-49ff-963e-06aac3d99067-part16', 'scsi-SQEMU_QEMU_HARDDISK_1d1bd04c-2fb3-49ff-963e-06aac3d99067-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 
'virtual': 1}})
2025-09-10 00:56:09.348016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-10 00:56:09.348028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-10 00:56:09.348039 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-10 00:56:09.348056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-10 00:56:09.348067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [],
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-10 00:56:09.348078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-10 00:56:09.348089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-10 00:56:09.348101 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--20419d67--2a88--5ee6--832e--dd0a34a7687a-osd--block--20419d67--2a88--5ee6--832e--dd0a34a7687a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-56OBMz-I7EX-VavL-tZwu-3Gki-M3Zl-yWbhOp', 'scsi-0QEMU_QEMU_HARDDISK_4b58dc27-9074-4d6f-a7a6-fd15259a7e00', 'scsi-SQEMU_QEMU_HARDDISK_4b58dc27-9074-4d6f-a7a6-fd15259a7e00'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-10 00:56:09.348128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-10 00:56:09.348146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7eb4c3bb-4118-4e06-ba0b-43b94bfc7779', 'scsi-SQEMU_QEMU_HARDDISK_7eb4c3bb-4118-4e06-ba0b-43b94bfc7779'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7eb4c3bb-4118-4e06-ba0b-43b94bfc7779-part1', 'scsi-SQEMU_QEMU_HARDDISK_7eb4c3bb-4118-4e06-ba0b-43b94bfc7779-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7eb4c3bb-4118-4e06-ba0b-43b94bfc7779-part14', 'scsi-SQEMU_QEMU_HARDDISK_7eb4c3bb-4118-4e06-ba0b-43b94bfc7779-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7eb4c3bb-4118-4e06-ba0b-43b94bfc7779-part15', 'scsi-SQEMU_QEMU_HARDDISK_7eb4c3bb-4118-4e06-ba0b-43b94bfc7779-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7eb4c3bb-4118-4e06-ba0b-43b94bfc7779-part16', 'scsi-SQEMU_QEMU_HARDDISK_7eb4c3bb-4118-4e06-ba0b-43b94bfc7779-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-10 00:56:09.348158 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-10 00:56:09.348170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-10 00:56:09.348182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-10-00-02-25-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-10 00:56:09.348201 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-10 00:56:09.348221 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--28e77ae9--929e--5c68--8a2a--91f3bea00aca-osd--block--28e77ae9--929e--5c68--8a2a--91f3bea00aca'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-d25IX1-hlTn-ZBV2-xwSV-H1En-zkyU-H6wSTA', 'scsi-0QEMU_QEMU_HARDDISK_4a1ca5dc-6f40-4107-905c-b866d721086e', 'scsi-SQEMU_QEMU_HARDDISK_4a1ca5dc-6f40-4107-905c-b866d721086e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-10 00:56:09.348234 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:56:09.348246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-10 00:56:09.348263 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ffc2c63-19a1-4e5a-ab50-a28911e045bb', 'scsi-SQEMU_QEMU_HARDDISK_9ffc2c63-19a1-4e5a-ab50-a28911e045bb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-10 00:56:09.348276 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_76e3aa19-82bc-491d-ab0d-cf92a10f04de', 'scsi-SQEMU_QEMU_HARDDISK_76e3aa19-82bc-491d-ab0d-cf92a10f04de'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_76e3aa19-82bc-491d-ab0d-cf92a10f04de-part1', 'scsi-SQEMU_QEMU_HARDDISK_76e3aa19-82bc-491d-ab0d-cf92a10f04de-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_76e3aa19-82bc-491d-ab0d-cf92a10f04de-part14', 'scsi-SQEMU_QEMU_HARDDISK_76e3aa19-82bc-491d-ab0d-cf92a10f04de-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_76e3aa19-82bc-491d-ab0d-cf92a10f04de-part15', 'scsi-SQEMU_QEMU_HARDDISK_76e3aa19-82bc-491d-ab0d-cf92a10f04de-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': 
[], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_76e3aa19-82bc-491d-ab0d-cf92a10f04de-part16', 'scsi-SQEMU_QEMU_HARDDISK_76e3aa19-82bc-491d-ab0d-cf92a10f04de-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-10 00:56:09.348302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-10 00:56:09.348314 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-10-00-02-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-10 00:56:09.348334 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--36dac960--67a7--54a4--bbd2--b6f8976b18f7-osd--block--36dac960--67a7--54a4--bbd2--b6f8976b18f7'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-W5LkY6-eCu4-yrK5-dhKk-hJo8-SW0n-F6I41f', 'scsi-0QEMU_QEMU_HARDDISK_2ea24e78-5d32-46b6-abea-13531085710c', 'scsi-SQEMU_QEMU_HARDDISK_2ea24e78-5d32-46b6-abea-13531085710c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-10 00:56:09.348349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea0201e7-6d46-42f0-8fca-6e90e492352f', 'scsi-SQEMU_QEMU_HARDDISK_ea0201e7-6d46-42f0-8fca-6e90e492352f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea0201e7-6d46-42f0-8fca-6e90e492352f-part1', 'scsi-SQEMU_QEMU_HARDDISK_ea0201e7-6d46-42f0-8fca-6e90e492352f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea0201e7-6d46-42f0-8fca-6e90e492352f-part14', 'scsi-SQEMU_QEMU_HARDDISK_ea0201e7-6d46-42f0-8fca-6e90e492352f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea0201e7-6d46-42f0-8fca-6e90e492352f-part15', 'scsi-SQEMU_QEMU_HARDDISK_ea0201e7-6d46-42f0-8fca-6e90e492352f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 
'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea0201e7-6d46-42f0-8fca-6e90e492352f-part16', 'scsi-SQEMU_QEMU_HARDDISK_ea0201e7-6d46-42f0-8fca-6e90e492352f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-10 00:56:09.348376 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--f4115e81--926e--57fb--8145--65084efa4466-osd--block--f4115e81--926e--57fb--8145--65084efa4466'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0R9dBU-VzRA-BtuH-Y8EJ-iZ82-XXeb-Hwxfe0', 'scsi-0QEMU_QEMU_HARDDISK_b86cce0a-40ed-4a07-99f1-19becb84c901', 'scsi-SQEMU_QEMU_HARDDISK_b86cce0a-40ed-4a07-99f1-19becb84c901'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-10 00:56:09.348429 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:56:09.348443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-10-00-02-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-10 00:56:09.348462 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d735b4b4-15bb-46a1-a658-203c9ec5fb9c', 'scsi-SQEMU_QEMU_HARDDISK_d735b4b4-15bb-46a1-a658-203c9ec5fb9c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-10 00:56:09.348476 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-10-00-02-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-10 00:56:09.348489 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:56:09.348501 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:56:09.348516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}})  2025-09-10 00:56:09.348538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-10 00:56:09.348552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-10 00:56:09.348565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-10 00:56:09.348586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-10 00:56:09.348599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-10 00:56:09.348612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-10 00:56:09.348631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-10 00:56:09.348645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42f8eac2-f52b-494e-b151-a2efb4a40d85', 'scsi-SQEMU_QEMU_HARDDISK_42f8eac2-f52b-494e-b151-a2efb4a40d85'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42f8eac2-f52b-494e-b151-a2efb4a40d85-part1', 'scsi-SQEMU_QEMU_HARDDISK_42f8eac2-f52b-494e-b151-a2efb4a40d85-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42f8eac2-f52b-494e-b151-a2efb4a40d85-part14', 'scsi-SQEMU_QEMU_HARDDISK_42f8eac2-f52b-494e-b151-a2efb4a40d85-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42f8eac2-f52b-494e-b151-a2efb4a40d85-part15', 'scsi-SQEMU_QEMU_HARDDISK_42f8eac2-f52b-494e-b151-a2efb4a40d85-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42f8eac2-f52b-494e-b151-a2efb4a40d85-part16', 'scsi-SQEMU_QEMU_HARDDISK_42f8eac2-f52b-494e-b151-a2efb4a40d85-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-10 00:56:09.348674 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-10-00-02-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-10 00:56:09.348687 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:56:09.348699 | orchestrator | 2025-09-10 00:56:09.348710 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-09-10 00:56:09.348722 | orchestrator | Wednesday 10 September 2025 00:45:17 +0000 (0:00:01.627) 0:00:32.356 *** 2025-09-10 00:56:09.348734 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4b73e898--cb4c--523f--8aca--971ee560c7ea-osd--block--4b73e898--cb4c--523f--8aca--971ee560c7ea', 'dm-uuid-LVM-uE5Yjf2CsxkFgHgIpbKsPiyHm2TurikN3S280Dz2nod0tzAwO5S1pyjk2inle8Pf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:56:09.348757 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2bea83b6--6800--529c--bdd8--a613f3421a6f-osd--block--2bea83b6--6800--529c--bdd8--a613f3421a6f', 'dm-uuid-LVM-ZI1l2hrd5ozIdIPbGSORiFKfU4pLhNqBcQz7LQPLKX2159t3r3EwXsxnR1q2MZN6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:56:09.348769 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:56:09.348787 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:56:09.348799 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:56:09.348818 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:56:09.348829 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:56:09.348840 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:56:09.348856 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:56:09.348875 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:56:09.348895 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 
'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_885f5351-8a1d-42ab-b6e2-24d16c7f1b28', 'scsi-SQEMU_QEMU_HARDDISK_885f5351-8a1d-42ab-b6e2-24d16c7f1b28'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_885f5351-8a1d-42ab-b6e2-24d16c7f1b28-part1', 'scsi-SQEMU_QEMU_HARDDISK_885f5351-8a1d-42ab-b6e2-24d16c7f1b28-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_885f5351-8a1d-42ab-b6e2-24d16c7f1b28-part14', 'scsi-SQEMU_QEMU_HARDDISK_885f5351-8a1d-42ab-b6e2-24d16c7f1b28-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_885f5351-8a1d-42ab-b6e2-24d16c7f1b28-part15', 'scsi-SQEMU_QEMU_HARDDISK_885f5351-8a1d-42ab-b6e2-24d16c7f1b28-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_885f5351-8a1d-42ab-b6e2-24d16c7f1b28-part16', 'scsi-SQEMU_QEMU_HARDDISK_885f5351-8a1d-42ab-b6e2-24d16c7f1b28-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': 
'4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:56:09.348913 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--4b73e898--cb4c--523f--8aca--971ee560c7ea-osd--block--4b73e898--cb4c--523f--8aca--971ee560c7ea'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YtsAg8-uQKK-fp6U-2Eoq-WWNO-UgLL-3GllCz', 'scsi-0QEMU_QEMU_HARDDISK_15be4489-a4ef-46b7-8669-fa4e45790ef6', 'scsi-SQEMU_QEMU_HARDDISK_15be4489-a4ef-46b7-8669-fa4e45790ef6'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:56:09.348926 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--20419d67--2a88--5ee6--832e--dd0a34a7687a-osd--block--20419d67--2a88--5ee6--832e--dd0a34a7687a', 'dm-uuid-LVM-2hB5Q2a5udGrsgyYcPzLEBRo1qDiEpux1eUKdeDY1QiWJ6egp5CTMxBgXWbEK8V7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:56:09.348944 | orchestrator | skipping: [testbed-node-3] => (item={'changed': 
False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--2bea83b6--6800--529c--bdd8--a613f3421a6f-osd--block--2bea83b6--6800--529c--bdd8--a613f3421a6f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1at0Au-Fg2h-xPI3-6SzS-AeD9-5EM6-mEzMll', 'scsi-0QEMU_QEMU_HARDDISK_2400494c-3cf2-4780-9c6b-527d492c6bfd', 'scsi-SQEMU_QEMU_HARDDISK_2400494c-3cf2-4780-9c6b-527d492c6bfd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:56:09.348963 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--28e77ae9--929e--5c68--8a2a--91f3bea00aca-osd--block--28e77ae9--929e--5c68--8a2a--91f3bea00aca', 'dm-uuid-LVM-k27w3X3DUUX1XZAerGiKa0AfnUAShbWdavK2lWdW2BR1Va40lsJhz6WVU6V9WMmo'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:56:09.348975 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': 
{'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b761a3bc-d220-47d2-9376-37cff0079757', 'scsi-SQEMU_QEMU_HARDDISK_b761a3bc-d220-47d2-9376-37cff0079757'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:56:09.348991 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:56:09.349011 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-10-00-02-28-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 
'item'})  2025-09-10 00:56:09.349023 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:56:09.349034 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:56:09.349053 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--36dac960--67a7--54a4--bbd2--b6f8976b18f7-osd--block--36dac960--67a7--54a4--bbd2--b6f8976b18f7', 'dm-uuid-LVM-UcEdMXyLpheVryiFjGGikHOHzacaQqtC6drg4fiUEBxjwrRysilyddDiBDve0Xzr'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 
'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:56:09.349065 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f4115e81--926e--57fb--8145--65084efa4466-osd--block--f4115e81--926e--57fb--8145--65084efa4466', 'dm-uuid-LVM-yNmKPiSdCRM90Ij0ZxbCNNJY3U3c3fnFtFxM8rdvExiBtaoR2TkcVtvQsk8io0dz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:56:09.349081 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:56:09.349099 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:56:09.349110 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:56:09.349121 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.349133 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:56:09.349150 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:56:09.349161 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:56:09.349173 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:56:09.349195 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:56:09.349207 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:56:09.349218 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:56:09.349229 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:56:09.349248 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:56:09.349260 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:56:09.349276 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:56:09.349294 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:56:09.349305 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:56:09.349316 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2025-09-10 00:56:09.349328 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:56:09.349345 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:56:09.349357 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:56:09.349435 
| orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_76e3aa19-82bc-491d-ab0d-cf92a10f04de', 'scsi-SQEMU_QEMU_HARDDISK_76e3aa19-82bc-491d-ab0d-cf92a10f04de'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_76e3aa19-82bc-491d-ab0d-cf92a10f04de-part1', 'scsi-SQEMU_QEMU_HARDDISK_76e3aa19-82bc-491d-ab0d-cf92a10f04de-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_76e3aa19-82bc-491d-ab0d-cf92a10f04de-part14', 'scsi-SQEMU_QEMU_HARDDISK_76e3aa19-82bc-491d-ab0d-cf92a10f04de-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_76e3aa19-82bc-491d-ab0d-cf92a10f04de-part15', 'scsi-SQEMU_QEMU_HARDDISK_76e3aa19-82bc-491d-ab0d-cf92a10f04de-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_76e3aa19-82bc-491d-ab0d-cf92a10f04de-part16', 'scsi-SQEMU_QEMU_HARDDISK_76e3aa19-82bc-491d-ab0d-cf92a10f04de-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:56:09.349452 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:56:09.349471 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--36dac960--67a7--54a4--bbd2--b6f8976b18f7-osd--block--36dac960--67a7--54a4--bbd2--b6f8976b18f7'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-W5LkY6-eCu4-yrK5-dhKk-hJo8-SW0n-F6I41f', 'scsi-0QEMU_QEMU_HARDDISK_2ea24e78-5d32-46b6-abea-13531085710c', 'scsi-SQEMU_QEMU_HARDDISK_2ea24e78-5d32-46b6-abea-13531085710c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:56:09.349496 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--f4115e81--926e--57fb--8145--65084efa4466-osd--block--f4115e81--926e--57fb--8145--65084efa4466'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0R9dBU-VzRA-BtuH-Y8EJ-iZ82-XXeb-Hwxfe0', 'scsi-0QEMU_QEMU_HARDDISK_b86cce0a-40ed-4a07-99f1-19becb84c901', 'scsi-SQEMU_QEMU_HARDDISK_b86cce0a-40ed-4a07-99f1-19becb84c901'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:56:09.349649 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d1bd04c-2fb3-49ff-963e-06aac3d99067', 'scsi-SQEMU_QEMU_HARDDISK_1d1bd04c-2fb3-49ff-963e-06aac3d99067'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d1bd04c-2fb3-49ff-963e-06aac3d99067-part1', 'scsi-SQEMU_QEMU_HARDDISK_1d1bd04c-2fb3-49ff-963e-06aac3d99067-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d1bd04c-2fb3-49ff-963e-06aac3d99067-part14', 'scsi-SQEMU_QEMU_HARDDISK_1d1bd04c-2fb3-49ff-963e-06aac3d99067-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d1bd04c-2fb3-49ff-963e-06aac3d99067-part15', 'scsi-SQEMU_QEMU_HARDDISK_1d1bd04c-2fb3-49ff-963e-06aac3d99067-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d1bd04c-2fb3-49ff-963e-06aac3d99067-part16', 'scsi-SQEMU_QEMU_HARDDISK_1d1bd04c-2fb3-49ff-963e-06aac3d99067-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-09-10 00:56:09.349684 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d735b4b4-15bb-46a1-a658-203c9ec5fb9c', 'scsi-SQEMU_QEMU_HARDDISK_d735b4b4-15bb-46a1-a658-203c9ec5fb9c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:56:09.349708 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-10-00-02-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:56:09.349720 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7eb4c3bb-4118-4e06-ba0b-43b94bfc7779', 'scsi-SQEMU_QEMU_HARDDISK_7eb4c3bb-4118-4e06-ba0b-43b94bfc7779'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7eb4c3bb-4118-4e06-ba0b-43b94bfc7779-part1', 'scsi-SQEMU_QEMU_HARDDISK_7eb4c3bb-4118-4e06-ba0b-43b94bfc7779-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7eb4c3bb-4118-4e06-ba0b-43b94bfc7779-part14', 'scsi-SQEMU_QEMU_HARDDISK_7eb4c3bb-4118-4e06-ba0b-43b94bfc7779-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7eb4c3bb-4118-4e06-ba0b-43b94bfc7779-part15', 'scsi-SQEMU_QEMU_HARDDISK_7eb4c3bb-4118-4e06-ba0b-43b94bfc7779-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7eb4c3bb-4118-4e06-ba0b-43b94bfc7779-part16', 'scsi-SQEMU_QEMU_HARDDISK_7eb4c3bb-4118-4e06-ba0b-43b94bfc7779-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-09-10 00:56:09.349737 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--20419d67--2a88--5ee6--832e--dd0a34a7687a-osd--block--20419d67--2a88--5ee6--832e--dd0a34a7687a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-56OBMz-I7EX-VavL-tZwu-3Gki-M3Zl-yWbhOp', 'scsi-0QEMU_QEMU_HARDDISK_4b58dc27-9074-4d6f-a7a6-fd15259a7e00', 'scsi-SQEMU_QEMU_HARDDISK_4b58dc27-9074-4d6f-a7a6-fd15259a7e00'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:56:09.349759 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-10-00-02-25-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:56:09.349769 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--28e77ae9--929e--5c68--8a2a--91f3bea00aca-osd--block--28e77ae9--929e--5c68--8a2a--91f3bea00aca'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-d25IX1-hlTn-ZBV2-xwSV-H1En-zkyU-H6wSTA', 'scsi-0QEMU_QEMU_HARDDISK_4a1ca5dc-6f40-4107-905c-b866d721086e', 'scsi-SQEMU_QEMU_HARDDISK_4a1ca5dc-6f40-4107-905c-b866d721086e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:56:09.349780 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ffc2c63-19a1-4e5a-ab50-a28911e045bb', 'scsi-SQEMU_QEMU_HARDDISK_9ffc2c63-19a1-4e5a-ab50-a28911e045bb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:56:09.349789 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.349804 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-10-00-02-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:56:09.349814 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:56:09.349830 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:56:09.349844 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:56:09.349854 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:56:09.349864 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:56:09.349874 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:56:09.349890 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:56:09.349900 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:56:09.349922 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea0201e7-6d46-42f0-8fca-6e90e492352f', 'scsi-SQEMU_QEMU_HARDDISK_ea0201e7-6d46-42f0-8fca-6e90e492352f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea0201e7-6d46-42f0-8fca-6e90e492352f-part1', 'scsi-SQEMU_QEMU_HARDDISK_ea0201e7-6d46-42f0-8fca-6e90e492352f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea0201e7-6d46-42f0-8fca-6e90e492352f-part14', 'scsi-SQEMU_QEMU_HARDDISK_ea0201e7-6d46-42f0-8fca-6e90e492352f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea0201e7-6d46-42f0-8fca-6e90e492352f-part15', 'scsi-SQEMU_QEMU_HARDDISK_ea0201e7-6d46-42f0-8fca-6e90e492352f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea0201e7-6d46-42f0-8fca-6e90e492352f-part16', 'scsi-SQEMU_QEMU_HARDDISK_ea0201e7-6d46-42f0-8fca-6e90e492352f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:56:09.349933 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-10-00-02-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:56:09.349943 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.349953 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.349963 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.349978 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:56:09.349996 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:56:09.350010 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:56:09.350049 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:56:09.350059 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:56:09.350071 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:56:09.350087 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:56:09.350104 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:56:09.350120 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42f8eac2-f52b-494e-b151-a2efb4a40d85', 'scsi-SQEMU_QEMU_HARDDISK_42f8eac2-f52b-494e-b151-a2efb4a40d85'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42f8eac2-f52b-494e-b151-a2efb4a40d85-part1', 'scsi-SQEMU_QEMU_HARDDISK_42f8eac2-f52b-494e-b151-a2efb4a40d85-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42f8eac2-f52b-494e-b151-a2efb4a40d85-part14', 'scsi-SQEMU_QEMU_HARDDISK_42f8eac2-f52b-494e-b151-a2efb4a40d85-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42f8eac2-f52b-494e-b151-a2efb4a40d85-part15', 'scsi-SQEMU_QEMU_HARDDISK_42f8eac2-f52b-494e-b151-a2efb4a40d85-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42f8eac2-f52b-494e-b151-a2efb4a40d85-part16', 'scsi-SQEMU_QEMU_HARDDISK_42f8eac2-f52b-494e-b151-a2efb4a40d85-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:56:09.350132 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-10-00-02-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:56:09.350142 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.350152 | orchestrator |
2025-09-10 00:56:09.350171 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-09-10 00:56:09.350181 | orchestrator | Wednesday 10 September 2025 00:45:19 +0000 (0:00:01.539) 0:00:33.896 ***
2025-09-10 00:56:09.350195 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.350206 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.350215 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.350224 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:56:09.350234 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:56:09.350244 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:56:09.350255 | orchestrator |
2025-09-10 00:56:09.350266 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-09-10 00:56:09.350278 | orchestrator | Wednesday 10 September 2025 00:45:20 +0000 (0:00:01.344) 0:00:35.240 ***
2025-09-10 00:56:09.350289 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.350300 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.350311 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.350322 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:56:09.350332 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:56:09.350343 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:56:09.350354 | orchestrator |
2025-09-10 00:56:09.350365 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-09-10 00:56:09.350376 | orchestrator | Wednesday 10 September 2025 00:45:21 +0000 (0:00:00.953) 0:00:36.194 ***
2025-09-10 00:56:09.350405 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.350417 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.350429 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.350440 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.350450 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.350461 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.350472 | orchestrator |
2025-09-10 00:56:09.350483 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-09-10 00:56:09.350494 | orchestrator | Wednesday 10 September 2025 00:45:23 +0000 (0:00:01.689) 0:00:37.883 ***
2025-09-10 00:56:09.350506 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.350517 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.350528 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.350539 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.350550 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.350561 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.350572 | orchestrator |
2025-09-10 00:56:09.350584 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-09-10 00:56:09.350600 | orchestrator | Wednesday 10 September 2025 00:45:24 +0000 (0:00:00.924) 0:00:38.807 ***
2025-09-10 00:56:09.350610 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.350619 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.350629 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.350638 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.350647 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.350657 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.350666 | orchestrator |
2025-09-10 00:56:09.350676 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-09-10 00:56:09.350686 | orchestrator | Wednesday 10 September 2025 00:45:25 +0000 (0:00:00.848) 0:00:39.655 ***
2025-09-10 00:56:09.350695 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.350705 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.350714 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.350724 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.350733 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.350743 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.350752 | orchestrator |
2025-09-10 00:56:09.350762 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-09-10 00:56:09.350771 | orchestrator | Wednesday 10 September 2025 00:45:26 +0000 (0:00:01.078) 0:00:40.734 ***
2025-09-10 00:56:09.350781 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-09-10 00:56:09.350798 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-09-10 00:56:09.350808 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-09-10 00:56:09.350817 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-09-10 00:56:09.350827 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-09-10 00:56:09.350836 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-09-10 00:56:09.350846 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-09-10 00:56:09.350855 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-09-10 00:56:09.350865 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-09-10 00:56:09.350874 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-09-10 00:56:09.350884 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2025-09-10 00:56:09.350893 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2025-09-10 00:56:09.350902 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2025-09-10 00:56:09.350912 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2025-09-10 00:56:09.350921 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-09-10 00:56:09.350931 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-09-10 00:56:09.350940 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2025-09-10 00:56:09.350949 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2025-09-10 00:56:09.350959 | orchestrator |
2025-09-10 00:56:09.350968 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-09-10 00:56:09.350978 | orchestrator | Wednesday 10 September 2025 00:45:30 +0000 (0:00:04.155) 0:00:44.890 ***
2025-09-10 00:56:09.350987 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-09-10 00:56:09.350997 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-09-10 00:56:09.351007 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-09-10 00:56:09.351016 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.351025 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-09-10 00:56:09.351035 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-09-10 00:56:09.351044 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-09-10 00:56:09.351054 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-09-10 00:56:09.351063 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-09-10 00:56:09.351073 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-09-10 00:56:09.351087 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.351097 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-10 00:56:09.351106 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-10 00:56:09.351116 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-10 00:56:09.351125 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.351135 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-09-10 00:56:09.351145 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-09-10 00:56:09.351154 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-09-10 00:56:09.351164 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.351173 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.351183 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-09-10 00:56:09.351192 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-09-10 00:56:09.351202 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-09-10 00:56:09.351211 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.351221 | orchestrator |
2025-09-10 00:56:09.351230 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-09-10 00:56:09.351240 | orchestrator | Wednesday 10 September 2025 00:45:31 +0000 (0:00:00.823) 0:00:45.713 ***
2025-09-10 00:56:09.351256 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.351265 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.351275 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.351285 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-10 00:56:09.351295 | orchestrator |
2025-09-10 00:56:09.351305 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-09-10 00:56:09.351315 | orchestrator | Wednesday 10 September 2025 00:45:32 +0000 (0:00:01.447) 0:00:47.161 ***
2025-09-10 00:56:09.351325 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.351334 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.351348 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.351357 | orchestrator |
2025-09-10 00:56:09.351367 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-09-10 00:56:09.351377 | orchestrator | Wednesday 10 September 2025 00:45:32 +0000 (0:00:00.329) 0:00:47.490 ***
2025-09-10 00:56:09.351399 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.351409 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.351419 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.351429 | orchestrator |
2025-09-10 00:56:09.351438 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-09-10 00:56:09.351448 | orchestrator | Wednesday 10 September 2025 00:45:33 +0000 (0:00:00.581) 0:00:48.072 ***
2025-09-10 00:56:09.351458 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.351467 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.351477 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.351487 | orchestrator |
2025-09-10 00:56:09.351496 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-09-10 00:56:09.351506 | orchestrator | Wednesday 10 September 2025 00:45:34 +0000 (0:00:00.667) 0:00:48.740 ***
2025-09-10 00:56:09.351516 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.351525 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.351535 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.351545 | orchestrator |
2025-09-10 00:56:09.351554 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-09-10 00:56:09.351564 | orchestrator | Wednesday 10 September 2025 00:45:34 +0000 (0:00:00.805) 0:00:49.545 ***
2025-09-10 00:56:09.351573 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-10 00:56:09.351583 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-10 00:56:09.351592 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-10 00:56:09.351602 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.351611 | orchestrator |
2025-09-10 00:56:09.351621 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-09-10 00:56:09.351630 | orchestrator | Wednesday 10 September 2025 00:45:35 +0000 (0:00:00.610) 0:00:50.155 ***
2025-09-10 00:56:09.351640 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-10 00:56:09.351649 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-10 00:56:09.351659 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-10 00:56:09.351669 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.351678 | orchestrator |
2025-09-10 00:56:09.351687 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-09-10 00:56:09.351697 | orchestrator | Wednesday 10 September 2025 00:45:35 +0000 (0:00:00.375) 0:00:50.531 ***
2025-09-10 00:56:09.351707 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-10 00:56:09.351716 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-10 00:56:09.351726 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-10 00:56:09.351735 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.351745 | orchestrator |
2025-09-10 00:56:09.351754 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-09-10 00:56:09.351770 | orchestrator | Wednesday 10 September 2025 00:45:36 +0000 (0:00:00.825) 0:00:51.356 ***
2025-09-10 00:56:09.351779 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.351789 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.351799 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.351808 | orchestrator |
2025-09-10 00:56:09.351818 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-09-10 00:56:09.351828 | orchestrator | Wednesday 10 September 2025 00:45:37 +0000 (0:00:00.457) 0:00:51.813 ***
2025-09-10 00:56:09.351837 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-09-10 00:56:09.351847 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-09-10 00:56:09.351857 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-09-10 00:56:09.351866 | orchestrator |
2025-09-10 00:56:09.351880 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-09-10 00:56:09.351890 | orchestrator | Wednesday 10 September 2025 00:45:37 +0000 (0:00:00.652) 0:00:52.466 ***
2025-09-10 00:56:09.351900 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-10 00:56:09.351910 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-10 00:56:09.351919 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-10 00:56:09.351929 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-09-10 00:56:09.351938 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-09-10 00:56:09.351948 |
orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-10 00:56:09.351957 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-10 00:56:09.351967 | orchestrator | 2025-09-10 00:56:09.351976 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-09-10 00:56:09.351985 | orchestrator | Wednesday 10 September 2025 00:45:39 +0000 (0:00:01.443) 0:00:53.910 *** 2025-09-10 00:56:09.351995 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-10 00:56:09.352004 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-10 00:56:09.352014 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-10 00:56:09.352023 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-10 00:56:09.352032 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-10 00:56:09.352042 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-10 00:56:09.352051 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-10 00:56:09.352061 | orchestrator | 2025-09-10 00:56:09.352074 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-10 00:56:09.352084 | orchestrator | Wednesday 10 September 2025 00:45:41 +0000 (0:00:01.856) 0:00:55.766 *** 2025-09-10 00:56:09.352094 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 00:56:09.352104 | orchestrator | 2025-09-10 00:56:09.352113 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] 
********************* 2025-09-10 00:56:09.352123 | orchestrator | Wednesday 10 September 2025 00:45:42 +0000 (0:00:01.169) 0:00:56.936 *** 2025-09-10 00:56:09.352133 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4, testbed-node-3, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 00:56:09.352142 | orchestrator | 2025-09-10 00:56:09.352152 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-10 00:56:09.352161 | orchestrator | Wednesday 10 September 2025 00:45:43 +0000 (0:00:01.417) 0:00:58.354 *** 2025-09-10 00:56:09.352170 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.352186 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:56:09.352195 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:56:09.352205 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:56:09.352214 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:56:09.352224 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:56:09.352233 | orchestrator | 2025-09-10 00:56:09.352243 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-10 00:56:09.352253 | orchestrator | Wednesday 10 September 2025 00:45:45 +0000 (0:00:02.000) 0:01:00.355 *** 2025-09-10 00:56:09.352262 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:56:09.352272 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:56:09.352281 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:56:09.352291 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:56:09.352301 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:56:09.352310 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:56:09.352320 | orchestrator | 2025-09-10 00:56:09.352329 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-10 00:56:09.352339 | orchestrator | Wednesday 10 September 2025 00:45:47 +0000 
(0:00:01.780) 0:01:02.135 *** 2025-09-10 00:56:09.352349 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:56:09.352358 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:56:09.352368 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:56:09.352377 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:56:09.352434 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:56:09.352444 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:56:09.352454 | orchestrator | 2025-09-10 00:56:09.352464 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-10 00:56:09.352473 | orchestrator | Wednesday 10 September 2025 00:45:48 +0000 (0:00:01.432) 0:01:03.568 *** 2025-09-10 00:56:09.352483 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:56:09.352493 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:56:09.352502 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:56:09.352512 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:56:09.352521 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:56:09.352531 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:56:09.352540 | orchestrator | 2025-09-10 00:56:09.352550 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-10 00:56:09.352560 | orchestrator | Wednesday 10 September 2025 00:45:50 +0000 (0:00:01.250) 0:01:04.818 *** 2025-09-10 00:56:09.352569 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.352579 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:56:09.352589 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:56:09.352598 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:56:09.352608 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:56:09.352618 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:56:09.352627 | orchestrator | 2025-09-10 00:56:09.352637 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 
2025-09-10 00:56:09.352652 | orchestrator | Wednesday 10 September 2025 00:45:51 +0000 (0:00:01.643) 0:01:06.462 *** 2025-09-10 00:56:09.352662 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.352672 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:56:09.352682 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:56:09.352691 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:56:09.352701 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:56:09.352710 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:56:09.352720 | orchestrator | 2025-09-10 00:56:09.352729 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-10 00:56:09.352739 | orchestrator | Wednesday 10 September 2025 00:45:52 +0000 (0:00:00.989) 0:01:07.452 *** 2025-09-10 00:56:09.352748 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.352758 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:56:09.352767 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:56:09.352777 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:56:09.352786 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:56:09.352802 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:56:09.352811 | orchestrator | 2025-09-10 00:56:09.352821 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-10 00:56:09.352830 | orchestrator | Wednesday 10 September 2025 00:45:53 +0000 (0:00:00.858) 0:01:08.310 *** 2025-09-10 00:56:09.352840 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:56:09.352849 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:56:09.352859 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:56:09.352868 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:56:09.352878 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:56:09.352887 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:56:09.352896 | orchestrator | 2025-09-10 
00:56:09.352906 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-10 00:56:09.352914 | orchestrator | Wednesday 10 September 2025 00:45:55 +0000 (0:00:01.737) 0:01:10.048 *** 2025-09-10 00:56:09.352922 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:56:09.352930 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:56:09.352938 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:56:09.352945 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:56:09.352953 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:56:09.352961 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:56:09.352968 | orchestrator | 2025-09-10 00:56:09.352980 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-10 00:56:09.352988 | orchestrator | Wednesday 10 September 2025 00:45:56 +0000 (0:00:01.149) 0:01:11.197 *** 2025-09-10 00:56:09.352996 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.353004 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:56:09.353012 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:56:09.353019 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:56:09.353027 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:56:09.353035 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:56:09.353043 | orchestrator | 2025-09-10 00:56:09.353050 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-10 00:56:09.353058 | orchestrator | Wednesday 10 September 2025 00:45:57 +0000 (0:00:01.204) 0:01:12.402 *** 2025-09-10 00:56:09.353066 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.353074 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:56:09.353082 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:56:09.353089 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:56:09.353097 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:56:09.353105 | 
orchestrator | ok: [testbed-node-2] 2025-09-10 00:56:09.353113 | orchestrator | 2025-09-10 00:56:09.353120 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-10 00:56:09.353128 | orchestrator | Wednesday 10 September 2025 00:45:58 +0000 (0:00:00.870) 0:01:13.273 *** 2025-09-10 00:56:09.353136 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:56:09.353144 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:56:09.353152 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:56:09.353159 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:56:09.353167 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:56:09.353175 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:56:09.353183 | orchestrator | 2025-09-10 00:56:09.353190 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-10 00:56:09.353198 | orchestrator | Wednesday 10 September 2025 00:45:59 +0000 (0:00:01.307) 0:01:14.581 *** 2025-09-10 00:56:09.353206 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:56:09.353214 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:56:09.353222 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:56:09.353229 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:56:09.353237 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:56:09.353245 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:56:09.353252 | orchestrator | 2025-09-10 00:56:09.353263 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-10 00:56:09.353276 | orchestrator | Wednesday 10 September 2025 00:46:00 +0000 (0:00:00.737) 0:01:15.318 *** 2025-09-10 00:56:09.353291 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:56:09.353299 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:56:09.353307 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:56:09.353315 | orchestrator | skipping: [testbed-node-0] 2025-09-10 
00:56:09.353322 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:56:09.353330 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:56:09.353338 | orchestrator | 2025-09-10 00:56:09.353345 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-10 00:56:09.353353 | orchestrator | Wednesday 10 September 2025 00:46:01 +0000 (0:00:01.000) 0:01:16.319 *** 2025-09-10 00:56:09.353361 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.353369 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:56:09.353376 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:56:09.353397 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:56:09.353405 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:56:09.353413 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:56:09.353421 | orchestrator | 2025-09-10 00:56:09.353429 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-10 00:56:09.353436 | orchestrator | Wednesday 10 September 2025 00:46:02 +0000 (0:00:00.833) 0:01:17.152 *** 2025-09-10 00:56:09.353444 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.353452 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:56:09.353459 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:56:09.353467 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:56:09.353475 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:56:09.353482 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:56:09.353490 | orchestrator | 2025-09-10 00:56:09.353502 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-10 00:56:09.353510 | orchestrator | Wednesday 10 September 2025 00:46:03 +0000 (0:00:01.112) 0:01:18.265 *** 2025-09-10 00:56:09.353518 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.353525 | orchestrator | skipping: [testbed-node-4] 2025-09-10 
00:56:09.353533 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:56:09.353541 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:56:09.353549 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:56:09.353556 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:56:09.353564 | orchestrator | 2025-09-10 00:56:09.353572 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-10 00:56:09.353580 | orchestrator | Wednesday 10 September 2025 00:46:04 +0000 (0:00:00.840) 0:01:19.106 *** 2025-09-10 00:56:09.353588 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:56:09.353596 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:56:09.353603 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:56:09.353611 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:56:09.353619 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:56:09.353626 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:56:09.353634 | orchestrator | 2025-09-10 00:56:09.353642 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-10 00:56:09.353650 | orchestrator | Wednesday 10 September 2025 00:46:05 +0000 (0:00:01.055) 0:01:20.162 *** 2025-09-10 00:56:09.353658 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:56:09.353665 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:56:09.353673 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:56:09.353681 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:56:09.353688 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:56:09.353696 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:56:09.353704 | orchestrator | 2025-09-10 00:56:09.353712 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-09-10 00:56:09.353719 | orchestrator | Wednesday 10 September 2025 00:46:06 +0000 (0:00:01.417) 0:01:21.580 *** 2025-09-10 00:56:09.353727 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:56:09.353735 | 
orchestrator | changed: [testbed-node-5] 2025-09-10 00:56:09.353743 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:56:09.353751 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:56:09.353763 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:56:09.353771 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:56:09.353779 | orchestrator | 2025-09-10 00:56:09.353787 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-09-10 00:56:09.353795 | orchestrator | Wednesday 10 September 2025 00:46:08 +0000 (0:00:01.710) 0:01:23.290 *** 2025-09-10 00:56:09.353803 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:56:09.353811 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:56:09.353818 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:56:09.353826 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:56:09.353834 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:56:09.353842 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:56:09.353849 | orchestrator | 2025-09-10 00:56:09.353857 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2025-09-10 00:56:09.353865 | orchestrator | Wednesday 10 September 2025 00:46:10 +0000 (0:00:02.221) 0:01:25.512 *** 2025-09-10 00:56:09.353873 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 00:56:09.353881 | orchestrator | 2025-09-10 00:56:09.353889 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2025-09-10 00:56:09.353897 | orchestrator | Wednesday 10 September 2025 00:46:12 +0000 (0:00:01.167) 0:01:26.679 *** 2025-09-10 00:56:09.353905 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.353912 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:56:09.353920 | orchestrator | 
skipping: [testbed-node-5] 2025-09-10 00:56:09.353928 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:56:09.353936 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:56:09.353943 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:56:09.353951 | orchestrator | 2025-09-10 00:56:09.353959 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2025-09-10 00:56:09.353993 | orchestrator | Wednesday 10 September 2025 00:46:12 +0000 (0:00:00.588) 0:01:27.267 *** 2025-09-10 00:56:09.354002 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.354009 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:56:09.354038 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:56:09.354046 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:56:09.354056 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:56:09.354063 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:56:09.354072 | orchestrator | 2025-09-10 00:56:09.354080 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2025-09-10 00:56:09.354088 | orchestrator | Wednesday 10 September 2025 00:46:13 +0000 (0:00:00.798) 0:01:28.066 *** 2025-09-10 00:56:09.354096 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-10 00:56:09.354103 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-10 00:56:09.354111 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-10 00:56:09.354119 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-10 00:56:09.354127 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-10 00:56:09.354134 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-10 00:56:09.354142 | orchestrator | ok: [testbed-node-5] => 
(item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-10 00:56:09.354150 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-10 00:56:09.354158 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-10 00:56:09.354166 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-10 00:56:09.354173 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-10 00:56:09.354191 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-10 00:56:09.354199 | orchestrator | 2025-09-10 00:56:09.354207 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-09-10 00:56:09.354215 | orchestrator | Wednesday 10 September 2025 00:46:14 +0000 (0:00:01.368) 0:01:29.435 *** 2025-09-10 00:56:09.354223 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:56:09.354231 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:56:09.354239 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:56:09.354247 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:56:09.354267 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:56:09.354275 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:56:09.354283 | orchestrator | 2025-09-10 00:56:09.354291 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2025-09-10 00:56:09.354299 | orchestrator | Wednesday 10 September 2025 00:46:16 +0000 (0:00:01.236) 0:01:30.672 *** 2025-09-10 00:56:09.354306 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.354314 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:56:09.354322 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:56:09.354329 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:56:09.354337 | 
orchestrator | skipping: [testbed-node-1] 2025-09-10 00:56:09.354345 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:56:09.354352 | orchestrator | 2025-09-10 00:56:09.354360 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-09-10 00:56:09.354368 | orchestrator | Wednesday 10 September 2025 00:46:16 +0000 (0:00:00.602) 0:01:31.274 *** 2025-09-10 00:56:09.354376 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.354397 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:56:09.354405 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:56:09.354413 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:56:09.354420 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:56:09.354428 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:56:09.354436 | orchestrator | 2025-09-10 00:56:09.354443 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2025-09-10 00:56:09.354451 | orchestrator | Wednesday 10 September 2025 00:46:17 +0000 (0:00:00.778) 0:01:32.053 *** 2025-09-10 00:56:09.354459 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.354471 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:56:09.354478 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:56:09.354486 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:56:09.354494 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:56:09.354502 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:56:09.354509 | orchestrator | 2025-09-10 00:56:09.354517 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2025-09-10 00:56:09.354525 | orchestrator | Wednesday 10 September 2025 00:46:17 +0000 (0:00:00.585) 0:01:32.639 *** 2025-09-10 00:56:09.354533 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, 
testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 00:56:09.354541 | orchestrator | 2025-09-10 00:56:09.354548 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-09-10 00:56:09.354556 | orchestrator | Wednesday 10 September 2025 00:46:19 +0000 (0:00:01.175) 0:01:33.815 *** 2025-09-10 00:56:09.354564 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:56:09.354572 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:56:09.354579 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:56:09.354587 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:56:09.354595 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:56:09.354602 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:56:09.354610 | orchestrator | 2025-09-10 00:56:09.354618 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-09-10 00:56:09.354625 | orchestrator | Wednesday 10 September 2025 00:47:20 +0000 (0:01:01.278) 0:02:35.093 *** 2025-09-10 00:56:09.354633 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-10 00:56:09.354649 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-10 00:56:09.354657 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-10 00:56:09.354664 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.354672 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-10 00:56:09.354680 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-10 00:56:09.354687 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-10 00:56:09.354695 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:56:09.354703 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-10 
00:56:09.354710 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2025-09-10 00:56:09.354718 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2025-09-10 00:56:09.354726 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.354734 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-09-10 00:56:09.354741 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2025-09-10 00:56:09.354749 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2025-09-10 00:56:09.354757 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.354765 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-09-10 00:56:09.354773 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2025-09-10 00:56:09.354780 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2025-09-10 00:56:09.354788 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.354796 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-09-10 00:56:09.354816 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2025-09-10 00:56:09.354824 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2025-09-10 00:56:09.354832 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.354840 | orchestrator |
2025-09-10 00:56:09.354847 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2025-09-10 00:56:09.354855 | orchestrator | Wednesday 10 September 2025  00:47:21 +0000 (0:00:00.609) 0:02:35.703 ***
2025-09-10 00:56:09.354863 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.354870 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.354878 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.354886 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.354893 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.354901 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.354909 | orchestrator |
2025-09-10 00:56:09.354917 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2025-09-10 00:56:09.354925 | orchestrator | Wednesday 10 September 2025  00:47:21 +0000 (0:00:00.699) 0:02:36.403 ***
2025-09-10 00:56:09.354932 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.354940 | orchestrator |
2025-09-10 00:56:09.354948 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2025-09-10 00:56:09.354955 | orchestrator | Wednesday 10 September 2025  00:47:21 +0000 (0:00:00.145) 0:02:36.549 ***
2025-09-10 00:56:09.354963 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.354971 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.354978 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.354986 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.354993 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.355001 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.355009 | orchestrator |
2025-09-10 00:56:09.355016 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2025-09-10 00:56:09.355030 | orchestrator | Wednesday 10 September 2025  00:47:22 +0000 (0:00:00.549) 0:02:37.099 ***
2025-09-10 00:56:09.355037 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.355045 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.355053 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.355060 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.355071 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.355079 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.355087 | orchestrator |
2025-09-10 00:56:09.355095 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2025-09-10 00:56:09.355102 | orchestrator | Wednesday 10 September 2025  00:47:23 +0000 (0:00:00.810) 0:02:37.909 ***
2025-09-10 00:56:09.355110 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.355118 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.355125 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.355133 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.355140 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.355148 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.355156 | orchestrator |
2025-09-10 00:56:09.355163 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2025-09-10 00:56:09.355171 | orchestrator | Wednesday 10 September 2025  00:47:23 +0000 (0:00:00.725) 0:02:38.635 ***
2025-09-10 00:56:09.355179 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.355187 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.355194 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.355202 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:56:09.355210 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:56:09.355217 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:56:09.355225 | orchestrator |
2025-09-10 00:56:09.355233 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2025-09-10 00:56:09.355241 | orchestrator | Wednesday 10 September 2025  00:47:27 +0000 (0:00:03.156) 0:02:41.792 ***
2025-09-10 00:56:09.355248 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.355256 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.355263 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.355271 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:56:09.355279 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:56:09.355286 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:56:09.355294 | orchestrator |
2025-09-10 00:56:09.355302 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2025-09-10 00:56:09.355309 | orchestrator | Wednesday 10 September 2025  00:47:27 +0000 (0:00:00.806) 0:02:42.598 ***
2025-09-10 00:56:09.355317 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-10 00:56:09.355326 | orchestrator |
2025-09-10 00:56:09.355334 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2025-09-10 00:56:09.355341 | orchestrator | Wednesday 10 September 2025  00:47:29 +0000 (0:00:01.720) 0:02:44.319 ***
2025-09-10 00:56:09.355349 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.355357 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.355365 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.355372 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.355380 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.355425 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.355433 | orchestrator |
2025-09-10 00:56:09.355441 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2025-09-10 00:56:09.355449 | orchestrator | Wednesday 10 September 2025  00:47:30 +0000 (0:00:00.598) 0:02:44.917 ***
2025-09-10 00:56:09.355457 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.355464 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.355472 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.355479 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.355486 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.355497 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.355503 | orchestrator |
2025-09-10 00:56:09.355510 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2025-09-10 00:56:09.355516 | orchestrator | Wednesday 10 September 2025  00:47:31 +0000 (0:00:00.937) 0:02:45.855 ***
2025-09-10 00:56:09.355523 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.355529 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.355536 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.355542 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.355549 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.355560 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.355566 | orchestrator |
2025-09-10 00:56:09.355573 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2025-09-10 00:56:09.355580 | orchestrator | Wednesday 10 September 2025  00:47:31 +0000 (0:00:00.642) 0:02:46.497 ***
2025-09-10 00:56:09.355586 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.355593 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.355599 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.355606 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.355612 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.355619 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.355625 | orchestrator |
2025-09-10 00:56:09.355631 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2025-09-10 00:56:09.355638 | orchestrator | Wednesday 10 September 2025  00:47:32 +0000 (0:00:00.873) 0:02:47.371 ***
2025-09-10 00:56:09.355644 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.355651 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.355658 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.355664 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.355671 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.355677 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.355684 | orchestrator |
2025-09-10 00:56:09.355690 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2025-09-10 00:56:09.355697 | orchestrator | Wednesday 10 September 2025  00:47:33 +0000 (0:00:00.869) 0:02:48.240 ***
2025-09-10 00:56:09.355703 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.355710 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.355716 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.355723 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.355729 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.355736 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.355742 | orchestrator |
2025-09-10 00:56:09.355749 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2025-09-10 00:56:09.355755 | orchestrator | Wednesday 10 September 2025  00:47:34 +0000 (0:00:01.034) 0:02:49.274 ***
2025-09-10 00:56:09.355762 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.355768 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.355778 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.355785 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.355792 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.355798 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.355805 | orchestrator |
2025-09-10 00:56:09.355811 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2025-09-10 00:56:09.355818 | orchestrator | Wednesday 10 September 2025  00:47:35 +0000 (0:00:00.657) 0:02:49.932 ***
2025-09-10 00:56:09.355825 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.355831 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.355838 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.355844 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.355851 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.355857 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.355864 | orchestrator |
2025-09-10 00:56:09.355870 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2025-09-10 00:56:09.355881 | orchestrator | Wednesday 10 September 2025  00:47:36 +0000 (0:00:01.012) 0:02:50.945 ***
2025-09-10 00:56:09.355888 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.355894 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.355901 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.355907 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:56:09.355914 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:56:09.355920 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:56:09.355927 | orchestrator |
2025-09-10 00:56:09.355933 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2025-09-10 00:56:09.355940 | orchestrator | Wednesday 10 September 2025  00:47:37 +0000 (0:00:01.271) 0:02:52.216 ***
2025-09-10 00:56:09.355946 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-10 00:56:09.355953 | orchestrator |
2025-09-10 00:56:09.355960 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2025-09-10 00:56:09.355966 | orchestrator | Wednesday 10 September 2025  00:47:38 +0000 (0:00:01.200) 0:02:53.416 ***
2025-09-10 00:56:09.355973 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2025-09-10 00:56:09.355979 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2025-09-10 00:56:09.355986 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2025-09-10 00:56:09.355992 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2025-09-10 00:56:09.355999 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2025-09-10 00:56:09.356006 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2025-09-10 00:56:09.356012 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2025-09-10 00:56:09.356019 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2025-09-10 00:56:09.356025 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2025-09-10 00:56:09.356032 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2025-09-10 00:56:09.356038 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2025-09-10 00:56:09.356045 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2025-09-10 00:56:09.356051 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2025-09-10 00:56:09.356058 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2025-09-10 00:56:09.356064 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2025-09-10 00:56:09.356071 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2025-09-10 00:56:09.356077 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2025-09-10 00:56:09.356084 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2025-09-10 00:56:09.356091 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2025-09-10 00:56:09.356097 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2025-09-10 00:56:09.356107 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2025-09-10 00:56:09.356113 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2025-09-10 00:56:09.356120 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2025-09-10 00:56:09.356126 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2025-09-10 00:56:09.356133 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2025-09-10 00:56:09.356139 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2025-09-10 00:56:09.356146 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2025-09-10 00:56:09.356152 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2025-09-10 00:56:09.356159 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2025-09-10 00:56:09.356165 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2025-09-10 00:56:09.356172 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2025-09-10 00:56:09.356186 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2025-09-10 00:56:09.356193 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2025-09-10 00:56:09.356199 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2025-09-10 00:56:09.356205 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2025-09-10 00:56:09.356212 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2025-09-10 00:56:09.356218 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2025-09-10 00:56:09.356225 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2025-09-10 00:56:09.356231 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2025-09-10 00:56:09.356238 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2025-09-10 00:56:09.356244 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2025-09-10 00:56:09.356251 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2025-09-10 00:56:09.356261 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2025-09-10 00:56:09.356268 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2025-09-10 00:56:09.356274 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2025-09-10 00:56:09.356280 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2025-09-10 00:56:09.356287 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2025-09-10 00:56:09.356293 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2025-09-10 00:56:09.356300 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2025-09-10 00:56:09.356306 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2025-09-10 00:56:09.356313 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2025-09-10 00:56:09.356320 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2025-09-10 00:56:09.356326 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2025-09-10 00:56:09.356332 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2025-09-10 00:56:09.356339 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2025-09-10 00:56:09.356345 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2025-09-10 00:56:09.356352 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2025-09-10 00:56:09.356358 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2025-09-10 00:56:09.356365 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2025-09-10 00:56:09.356371 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2025-09-10 00:56:09.356378 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2025-09-10 00:56:09.356398 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2025-09-10 00:56:09.356405 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2025-09-10 00:56:09.356411 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2025-09-10 00:56:09.356418 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2025-09-10 00:56:09.356425 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2025-09-10 00:56:09.356431 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2025-09-10 00:56:09.356437 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2025-09-10 00:56:09.356444 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2025-09-10 00:56:09.356450 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2025-09-10 00:56:09.356457 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2025-09-10 00:56:09.356463 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2025-09-10 00:56:09.356474 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2025-09-10 00:56:09.356481 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2025-09-10 00:56:09.356487 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-09-10 00:56:09.356493 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2025-09-10 00:56:09.356500 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2025-09-10 00:56:09.356506 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2025-09-10 00:56:09.356517 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-09-10 00:56:09.356524 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-09-10 00:56:09.356530 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2025-09-10 00:56:09.356537 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-09-10 00:56:09.356543 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-09-10 00:56:09.356550 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2025-09-10 00:56:09.356556 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2025-09-10 00:56:09.356563 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2025-09-10 00:56:09.356569 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2025-09-10 00:56:09.356576 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2025-09-10 00:56:09.356582 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2025-09-10 00:56:09.356589 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2025-09-10 00:56:09.356595 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-09-10 00:56:09.356602 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2025-09-10 00:56:09.356608 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2025-09-10 00:56:09.356615 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2025-09-10 00:56:09.356622 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2025-09-10 00:56:09.356628 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2025-09-10 00:56:09.356635 | orchestrator |
2025-09-10 00:56:09.356641 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2025-09-10 00:56:09.356648 | orchestrator | Wednesday 10 September 2025  00:47:45 +0000 (0:00:06.955) 0:03:00.372 ***
2025-09-10 00:56:09.356654 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.356661 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.356671 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.356678 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-10 00:56:09.356685 | orchestrator |
2025-09-10 00:56:09.356691 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2025-09-10 00:56:09.356698 | orchestrator | Wednesday 10 September 2025  00:47:46 +0000 (0:00:01.189) 0:03:01.562 ***
2025-09-10 00:56:09.356705 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-09-10 00:56:09.356712 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-09-10 00:56:09.356718 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-09-10 00:56:09.356725 | orchestrator |
2025-09-10 00:56:09.356731 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2025-09-10 00:56:09.356738 | orchestrator | Wednesday 10 September 2025  00:47:47 +0000 (0:00:00.903) 0:03:02.465 ***
2025-09-10 00:56:09.356745 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-09-10 00:56:09.356757 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-09-10 00:56:09.356763 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-09-10 00:56:09.356770 | orchestrator |
2025-09-10 00:56:09.356777 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2025-09-10 00:56:09.356783 | orchestrator | Wednesday 10 September 2025  00:47:49 +0000 (0:00:01.731) 0:03:04.196 ***
2025-09-10 00:56:09.356790 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.356796 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.356803 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.356809 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.356816 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.356822 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.356829 | orchestrator |
2025-09-10 00:56:09.356835 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2025-09-10 00:56:09.356842 | orchestrator | Wednesday 10 September 2025  00:47:50 +0000 (0:00:00.840) 0:03:05.037 ***
2025-09-10 00:56:09.356849 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.356855 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.356862 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.356868 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.356875 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.356881 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.356888 | orchestrator |
2025-09-10 00:56:09.356894 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2025-09-10 00:56:09.356901 | orchestrator | Wednesday 10 September 2025  00:47:51 +0000 (0:00:00.851) 0:03:05.888 ***
2025-09-10 00:56:09.356907 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.356914 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.356920 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.356927 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.356933 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.356940 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.356946 | orchestrator |
2025-09-10 00:56:09.356953 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2025-09-10 00:56:09.356960 | orchestrator | Wednesday 10 September 2025  00:47:51 +0000 (0:00:00.662) 0:03:06.551 ***
2025-09-10 00:56:09.356970 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.356976 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.356983 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.356990 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.356996 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.357003 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.357009 | orchestrator |
2025-09-10 00:56:09.357016 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2025-09-10 00:56:09.357022 | orchestrator | Wednesday 10 September 2025  00:47:52 +0000 (0:00:00.671) 0:03:07.222 ***
2025-09-10 00:56:09.357029 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.357036 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.357042 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.357049 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.357055 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.357062 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.357068 | orchestrator |
2025-09-10 00:56:09.357075 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-09-10 00:56:09.357081 | orchestrator | Wednesday 10 September 2025  00:47:53 +0000 (0:00:00.756) 0:03:07.979 ***
2025-09-10 00:56:09.357088 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.357094 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.357101 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.357112 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.357119 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.357125 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.357132 | orchestrator |
2025-09-10 00:56:09.357138 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-09-10 00:56:09.357145 | orchestrator | Wednesday 10 September 2025  00:47:53 +0000 (0:00:00.534) 0:03:08.513 ***
2025-09-10 00:56:09.357152 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.357158 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.357165 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.357171 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.357178 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.357185 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.357191 | orchestrator |
2025-09-10 00:56:09.357201 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-09-10 00:56:09.357208 | orchestrator | Wednesday 10 September 2025  00:47:54 +0000 (0:00:00.778) 0:03:09.292 ***
2025-09-10 00:56:09.357214 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.357221 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.357227 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.357234 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.357241 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.357247 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.357254 | orchestrator |
2025-09-10 00:56:09.357260 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-09-10 00:56:09.357267 | orchestrator | Wednesday 10 September 2025  00:47:55 +0000 (0:00:00.894) 0:03:10.186 ***
2025-09-10 00:56:09.357273 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.357280 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.357287 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.357293 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.357300 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.357306 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.357313 | orchestrator |
2025-09-10 00:56:09.357320 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2025-09-10 00:56:09.357326 | orchestrator | Wednesday 10 September 2025  00:47:58 +0000 (0:00:03.154) 0:03:13.340 ***
2025-09-10 00:56:09.357333 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.357339 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.357346 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.357353 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.357359 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.357366 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.357372 | orchestrator |
2025-09-10 00:56:09.357379 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2025-09-10 00:56:09.357400 | orchestrator | Wednesday 10 September 2025  00:47:59 +0000 (0:00:00.691) 0:03:14.032 ***
2025-09-10 00:56:09.357407 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.357413 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.357420 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.357426 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.357433 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.357439 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.357446 | orchestrator |
2025-09-10 00:56:09.357453 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2025-09-10 00:56:09.357459 | orchestrator | Wednesday 10 September 2025  00:48:00 +0000 (0:00:01.280) 0:03:15.313 ***
2025-09-10 00:56:09.357466 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.357472 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.357479 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.357485 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.357492 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.357503 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.357510 | orchestrator |
2025-09-10 00:56:09.357517 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2025-09-10 00:56:09.357523 | orchestrator | Wednesday 10 September 2025  00:48:01 +0000 (0:00:00.900) 0:03:16.213 ***
2025-09-10 00:56:09.357530 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-09-10 00:56:09.357536 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-09-10 00:56:09.357543 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-09-10 00:56:09.357550 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.357556 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.357563 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.357569 | orchestrator |
2025-09-10 00:56:09.357580 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2025-09-10 00:56:09.357586 | orchestrator | Wednesday 10 September 2025  00:48:02 +0000 (0:00:01.034) 0:03:17.248 ***
2025-09-10 00:56:09.357594 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2025-09-10 00:56:09.357602 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2025-09-10 00:56:09.357608 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2025-09-10 00:56:09.357615 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.357622 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2025-09-10 00:56:09.357632 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2025-09-10 00:56:09.357639 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2025-09-10 00:56:09.357645 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.357652 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.357658 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.357665 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.357671 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.357678 | orchestrator |
2025-09-10 00:56:09.357684 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2025-09-10 00:56:09.357691 | orchestrator | Wednesday 10 September 2025  00:48:03 +0000 (0:00:00.813) 0:03:18.061 ***
2025-09-10 00:56:09.357698 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.357704 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.357717 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.357724 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.357730 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.357737 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.357743 | orchestrator |
2025-09-10 00:56:09.357750 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2025-09-10 00:56:09.357756 | orchestrator | Wednesday 10 September 2025  00:48:04 +0000 (0:00:01.083) 0:03:19.145 ***
2025-09-10 00:56:09.357763 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.357769 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.357776 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.357782 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.357789 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.357795 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.357802 | orchestrator |
2025-09-10 00:56:09.357808 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-09-10 00:56:09.357815 | orchestrator | Wednesday 10 September 2025  00:48:05 +0000 (0:00:00.514) 0:03:19.659 ***
2025-09-10 00:56:09.357822 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.357828 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.357835 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.357841 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.357848 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.357854 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.357861 | orchestrator |
2025-09-10 00:56:09.357867 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-09-10 00:56:09.357874 | orchestrator | Wednesday 10 September 2025  00:48:05 +0000 (0:00:00.806) 0:03:20.466 ***
2025-09-10 00:56:09.357880 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.357887 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.357893 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.357900 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.357906 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.357913 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.357919 | orchestrator |
2025-09-10 00:56:09.357926 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-09-10 00:56:09.357933 | orchestrator | Wednesday 10 September 2025  00:48:06 +0000 (0:00:00.665) 0:03:21.131 ***
2025-09-10 00:56:09.357939 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.357949 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.357956 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.357963 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.357969 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.357976 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.357982 | orchestrator |
2025-09-10 00:56:09.357989 |
orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-09-10 00:56:09.357995 | orchestrator | Wednesday 10 September 2025 00:48:07 +0000 (0:00:00.824) 0:03:21.956 *** 2025-09-10 00:56:09.358002 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:56:09.358008 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:56:09.358039 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:56:09.358048 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:56:09.358054 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:56:09.358061 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:56:09.358068 | orchestrator | 2025-09-10 00:56:09.358075 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-09-10 00:56:09.358081 | orchestrator | Wednesday 10 September 2025 00:48:08 +0000 (0:00:00.893) 0:03:22.850 *** 2025-09-10 00:56:09.358088 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-10 00:56:09.358095 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-10 00:56:09.358101 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-10 00:56:09.358112 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.358119 | orchestrator | 2025-09-10 00:56:09.358125 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-09-10 00:56:09.358132 | orchestrator | Wednesday 10 September 2025 00:48:08 +0000 (0:00:00.462) 0:03:23.312 *** 2025-09-10 00:56:09.358139 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-10 00:56:09.358145 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-10 00:56:09.358151 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-10 00:56:09.358158 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.358164 | orchestrator | 2025-09-10 00:56:09.358171 | 
orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-09-10 00:56:09.358181 | orchestrator | Wednesday 10 September 2025 00:48:09 +0000 (0:00:00.504) 0:03:23.817 *** 2025-09-10 00:56:09.358188 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-10 00:56:09.358194 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-10 00:56:09.358201 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-10 00:56:09.358207 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.358214 | orchestrator | 2025-09-10 00:56:09.358220 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-09-10 00:56:09.358227 | orchestrator | Wednesday 10 September 2025 00:48:09 +0000 (0:00:00.728) 0:03:24.545 *** 2025-09-10 00:56:09.358233 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:56:09.358240 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:56:09.358247 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:56:09.358253 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:56:09.358260 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:56:09.358266 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:56:09.358273 | orchestrator | 2025-09-10 00:56:09.358279 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-09-10 00:56:09.358286 | orchestrator | Wednesday 10 September 2025 00:48:10 +0000 (0:00:00.718) 0:03:25.264 *** 2025-09-10 00:56:09.358293 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-10 00:56:09.358299 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-09-10 00:56:09.358306 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:56:09.358312 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-09-10 00:56:09.358319 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:56:09.358325 | orchestrator | skipping: 
[testbed-node-1] => (item=0)  2025-09-10 00:56:09.358332 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:56:09.358339 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-09-10 00:56:09.358345 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-09-10 00:56:09.358351 | orchestrator | 2025-09-10 00:56:09.358358 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2025-09-10 00:56:09.358365 | orchestrator | Wednesday 10 September 2025 00:48:12 +0000 (0:00:01.888) 0:03:27.153 *** 2025-09-10 00:56:09.358371 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:56:09.358378 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:56:09.358415 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:56:09.358422 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:56:09.358429 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:56:09.358435 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:56:09.358442 | orchestrator | 2025-09-10 00:56:09.358449 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-10 00:56:09.358455 | orchestrator | Wednesday 10 September 2025 00:48:15 +0000 (0:00:03.326) 0:03:30.480 *** 2025-09-10 00:56:09.358462 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:56:09.358468 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:56:09.358475 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:56:09.358481 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:56:09.358488 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:56:09.358494 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:56:09.358506 | orchestrator | 2025-09-10 00:56:09.358513 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-09-10 00:56:09.358520 | orchestrator | Wednesday 10 September 2025 00:48:17 +0000 (0:00:01.230) 0:03:31.710 *** 2025-09-10 00:56:09.358526 | orchestrator | 
skipping: [testbed-node-3] 2025-09-10 00:56:09.358532 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:56:09.358538 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:56:09.358544 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 00:56:09.358551 | orchestrator | 2025-09-10 00:56:09.358557 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-09-10 00:56:09.358563 | orchestrator | Wednesday 10 September 2025 00:48:18 +0000 (0:00:01.076) 0:03:32.786 *** 2025-09-10 00:56:09.358569 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:56:09.358575 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:56:09.358581 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:56:09.358587 | orchestrator | 2025-09-10 00:56:09.358603 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-09-10 00:56:09.358610 | orchestrator | Wednesday 10 September 2025 00:48:18 +0000 (0:00:00.345) 0:03:33.132 *** 2025-09-10 00:56:09.358616 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:56:09.358622 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:56:09.358628 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:56:09.358634 | orchestrator | 2025-09-10 00:56:09.358641 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-09-10 00:56:09.358647 | orchestrator | Wednesday 10 September 2025 00:48:19 +0000 (0:00:01.256) 0:03:34.389 *** 2025-09-10 00:56:09.358653 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-10 00:56:09.358659 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-10 00:56:09.358665 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-10 00:56:09.358672 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:56:09.358678 | orchestrator | 
2025-09-10 00:56:09.358684 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-09-10 00:56:09.358690 | orchestrator | Wednesday 10 September 2025 00:48:20 +0000 (0:00:00.893) 0:03:35.282 *** 2025-09-10 00:56:09.358696 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:56:09.358702 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:56:09.358708 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:56:09.358714 | orchestrator | 2025-09-10 00:56:09.358721 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-09-10 00:56:09.358727 | orchestrator | Wednesday 10 September 2025 00:48:20 +0000 (0:00:00.310) 0:03:35.593 *** 2025-09-10 00:56:09.358733 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:56:09.358739 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:56:09.358745 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:56:09.358751 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-10 00:56:09.358758 | orchestrator | 2025-09-10 00:56:09.358764 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-09-10 00:56:09.358773 | orchestrator | Wednesday 10 September 2025 00:48:22 +0000 (0:00:01.215) 0:03:36.809 *** 2025-09-10 00:56:09.358780 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-10 00:56:09.358786 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-10 00:56:09.358792 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-10 00:56:09.358798 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.358804 | orchestrator | 2025-09-10 00:56:09.358810 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-09-10 00:56:09.358817 | orchestrator | Wednesday 10 September 2025 00:48:22 +0000 
(0:00:00.633) 0:03:37.442 *** 2025-09-10 00:56:09.358823 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.358829 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:56:09.358840 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:56:09.358846 | orchestrator | 2025-09-10 00:56:09.358852 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-09-10 00:56:09.358858 | orchestrator | Wednesday 10 September 2025 00:48:23 +0000 (0:00:00.646) 0:03:38.088 *** 2025-09-10 00:56:09.358864 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.358870 | orchestrator | 2025-09-10 00:56:09.358877 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-09-10 00:56:09.358883 | orchestrator | Wednesday 10 September 2025 00:48:23 +0000 (0:00:00.215) 0:03:38.304 *** 2025-09-10 00:56:09.358889 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.358895 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:56:09.358901 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:56:09.358907 | orchestrator | 2025-09-10 00:56:09.358913 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-09-10 00:56:09.358919 | orchestrator | Wednesday 10 September 2025 00:48:24 +0000 (0:00:00.688) 0:03:38.993 *** 2025-09-10 00:56:09.358926 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.358932 | orchestrator | 2025-09-10 00:56:09.358938 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-09-10 00:56:09.358944 | orchestrator | Wednesday 10 September 2025 00:48:24 +0000 (0:00:00.381) 0:03:39.374 *** 2025-09-10 00:56:09.358950 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.358956 | orchestrator | 2025-09-10 00:56:09.358963 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-09-10 
00:56:09.358969 | orchestrator | Wednesday 10 September 2025 00:48:24 +0000 (0:00:00.265) 0:03:39.640 *** 2025-09-10 00:56:09.358975 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.358981 | orchestrator | 2025-09-10 00:56:09.358987 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-09-10 00:56:09.358993 | orchestrator | Wednesday 10 September 2025 00:48:25 +0000 (0:00:00.129) 0:03:39.769 *** 2025-09-10 00:56:09.358999 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.359006 | orchestrator | 2025-09-10 00:56:09.359012 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-09-10 00:56:09.359018 | orchestrator | Wednesday 10 September 2025 00:48:25 +0000 (0:00:00.208) 0:03:39.977 *** 2025-09-10 00:56:09.359024 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.359030 | orchestrator | 2025-09-10 00:56:09.359036 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-09-10 00:56:09.359043 | orchestrator | Wednesday 10 September 2025 00:48:25 +0000 (0:00:00.180) 0:03:40.157 *** 2025-09-10 00:56:09.359049 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-10 00:56:09.359055 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-10 00:56:09.359061 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-10 00:56:09.359067 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.359073 | orchestrator | 2025-09-10 00:56:09.359079 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-09-10 00:56:09.359086 | orchestrator | Wednesday 10 September 2025 00:48:25 +0000 (0:00:00.367) 0:03:40.525 *** 2025-09-10 00:56:09.359092 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.359101 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:56:09.359107 | 
orchestrator | skipping: [testbed-node-5] 2025-09-10 00:56:09.359113 | orchestrator | 2025-09-10 00:56:09.359119 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-09-10 00:56:09.359126 | orchestrator | Wednesday 10 September 2025 00:48:26 +0000 (0:00:00.778) 0:03:41.303 *** 2025-09-10 00:56:09.359132 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.359138 | orchestrator | 2025-09-10 00:56:09.359144 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-09-10 00:56:09.359150 | orchestrator | Wednesday 10 September 2025 00:48:27 +0000 (0:00:00.367) 0:03:41.670 *** 2025-09-10 00:56:09.359156 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.359166 | orchestrator | 2025-09-10 00:56:09.359172 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-09-10 00:56:09.359178 | orchestrator | Wednesday 10 September 2025 00:48:27 +0000 (0:00:00.194) 0:03:41.865 *** 2025-09-10 00:56:09.359184 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:56:09.359190 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:56:09.359196 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:56:09.359202 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-10 00:56:09.359209 | orchestrator | 2025-09-10 00:56:09.359215 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-09-10 00:56:09.359221 | orchestrator | Wednesday 10 September 2025 00:48:28 +0000 (0:00:00.893) 0:03:42.758 *** 2025-09-10 00:56:09.359227 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:56:09.359233 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:56:09.359239 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:56:09.359245 | orchestrator | 2025-09-10 00:56:09.359251 | orchestrator | RUNNING HANDLER 
[ceph-handler : Copy mds restart script] *********************** 2025-09-10 00:56:09.359257 | orchestrator | Wednesday 10 September 2025 00:48:28 +0000 (0:00:00.475) 0:03:43.233 *** 2025-09-10 00:56:09.359263 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:56:09.359269 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:56:09.359275 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:56:09.359281 | orchestrator | 2025-09-10 00:56:09.359290 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-09-10 00:56:09.359297 | orchestrator | Wednesday 10 September 2025 00:48:30 +0000 (0:00:01.932) 0:03:45.165 *** 2025-09-10 00:56:09.359303 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-10 00:56:09.359309 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-10 00:56:09.359315 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-10 00:56:09.359321 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.359327 | orchestrator | 2025-09-10 00:56:09.359333 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-09-10 00:56:09.359339 | orchestrator | Wednesday 10 September 2025 00:48:31 +0000 (0:00:00.568) 0:03:45.734 *** 2025-09-10 00:56:09.359345 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:56:09.359351 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:56:09.359357 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:56:09.359363 | orchestrator | 2025-09-10 00:56:09.359369 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-09-10 00:56:09.359375 | orchestrator | Wednesday 10 September 2025 00:48:31 +0000 (0:00:00.577) 0:03:46.312 *** 2025-09-10 00:56:09.359394 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:56:09.359401 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:56:09.359407 | orchestrator | 
skipping: [testbed-node-2] 2025-09-10 00:56:09.359413 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-10 00:56:09.359419 | orchestrator | 2025-09-10 00:56:09.359425 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-09-10 00:56:09.359431 | orchestrator | Wednesday 10 September 2025 00:48:32 +0000 (0:00:01.091) 0:03:47.404 *** 2025-09-10 00:56:09.359437 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:56:09.359443 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:56:09.359449 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:56:09.359455 | orchestrator | 2025-09-10 00:56:09.359461 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-09-10 00:56:09.359467 | orchestrator | Wednesday 10 September 2025 00:48:33 +0000 (0:00:00.257) 0:03:47.662 *** 2025-09-10 00:56:09.359474 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:56:09.359480 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:56:09.359486 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:56:09.359492 | orchestrator | 2025-09-10 00:56:09.359498 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-09-10 00:56:09.359508 | orchestrator | Wednesday 10 September 2025 00:48:34 +0000 (0:00:01.324) 0:03:48.987 *** 2025-09-10 00:56:09.359515 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-10 00:56:09.359521 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-10 00:56:09.359527 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-10 00:56:09.359533 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.359539 | orchestrator | 2025-09-10 00:56:09.359545 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-09-10 
00:56:09.359551 | orchestrator | Wednesday 10 September 2025 00:48:34 +0000 (0:00:00.650) 0:03:49.637 *** 2025-09-10 00:56:09.359557 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:56:09.359563 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:56:09.359569 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:56:09.359575 | orchestrator | 2025-09-10 00:56:09.359581 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2025-09-10 00:56:09.359587 | orchestrator | Wednesday 10 September 2025 00:48:35 +0000 (0:00:00.368) 0:03:50.005 *** 2025-09-10 00:56:09.359593 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.359599 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:56:09.359605 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:56:09.359611 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:56:09.359617 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:56:09.359624 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:56:09.359630 | orchestrator | 2025-09-10 00:56:09.359636 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-09-10 00:56:09.359646 | orchestrator | Wednesday 10 September 2025 00:48:36 +0000 (0:00:00.670) 0:03:50.676 *** 2025-09-10 00:56:09.359652 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.359658 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:56:09.359664 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:56:09.359670 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 00:56:09.359677 | orchestrator | 2025-09-10 00:56:09.359683 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-09-10 00:56:09.359689 | orchestrator | Wednesday 10 September 2025 00:48:37 +0000 (0:00:01.416) 0:03:52.093 *** 2025-09-10 00:56:09.359695 | orchestrator | 
ok: [testbed-node-0] 2025-09-10 00:56:09.359701 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:56:09.359707 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:56:09.359713 | orchestrator | 2025-09-10 00:56:09.359719 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-09-10 00:56:09.359725 | orchestrator | Wednesday 10 September 2025 00:48:37 +0000 (0:00:00.267) 0:03:52.360 *** 2025-09-10 00:56:09.359731 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:56:09.359737 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:56:09.359743 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:56:09.359749 | orchestrator | 2025-09-10 00:56:09.359755 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-09-10 00:56:09.359761 | orchestrator | Wednesday 10 September 2025 00:48:39 +0000 (0:00:01.446) 0:03:53.806 *** 2025-09-10 00:56:09.359767 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-10 00:56:09.359773 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-10 00:56:09.359779 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-10 00:56:09.359785 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:56:09.359791 | orchestrator | 2025-09-10 00:56:09.359797 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-09-10 00:56:09.359803 | orchestrator | Wednesday 10 September 2025 00:48:39 +0000 (0:00:00.567) 0:03:54.373 *** 2025-09-10 00:56:09.359809 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:56:09.359818 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:56:09.359825 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:56:09.359835 | orchestrator | 2025-09-10 00:56:09.359841 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-09-10 00:56:09.359847 | orchestrator | 2025-09-10 
00:56:09.359853 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-10 00:56:09.359859 | orchestrator | Wednesday 10 September 2025 00:48:40 +0000 (0:00:00.604) 0:03:54.977 *** 2025-09-10 00:56:09.359865 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 00:56:09.359871 | orchestrator | 2025-09-10 00:56:09.359877 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-10 00:56:09.359883 | orchestrator | Wednesday 10 September 2025 00:48:40 +0000 (0:00:00.622) 0:03:55.599 *** 2025-09-10 00:56:09.359889 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 00:56:09.359895 | orchestrator | 2025-09-10 00:56:09.359901 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-10 00:56:09.359907 | orchestrator | Wednesday 10 September 2025 00:48:41 +0000 (0:00:00.557) 0:03:56.157 *** 2025-09-10 00:56:09.359913 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:56:09.359919 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:56:09.359925 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:56:09.359931 | orchestrator | 2025-09-10 00:56:09.359938 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-10 00:56:09.359944 | orchestrator | Wednesday 10 September 2025 00:48:42 +0000 (0:00:00.696) 0:03:56.854 *** 2025-09-10 00:56:09.359950 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:56:09.359956 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:56:09.359962 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:56:09.359968 | orchestrator | 2025-09-10 00:56:09.359974 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 
2025-09-10 00:56:09.359980 | orchestrator | Wednesday 10 September 2025 00:48:42 +0000 (0:00:00.304) 0:03:57.159 ***
2025-09-10 00:56:09.359986 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.359992 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.359998 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.360004 | orchestrator |
2025-09-10 00:56:09.360010 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-09-10 00:56:09.360016 | orchestrator | Wednesday 10 September 2025 00:48:43 +0000 (0:00:00.622) 0:03:57.781 ***
2025-09-10 00:56:09.360022 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.360028 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.360034 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.360040 | orchestrator |
2025-09-10 00:56:09.360046 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-09-10 00:56:09.360052 | orchestrator | Wednesday 10 September 2025 00:48:43 +0000 (0:00:00.430) 0:03:58.212 ***
2025-09-10 00:56:09.360059 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:56:09.360064 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:56:09.360070 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:56:09.360077 | orchestrator |
2025-09-10 00:56:09.360083 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-09-10 00:56:09.360089 | orchestrator | Wednesday 10 September 2025 00:48:44 +0000 (0:00:00.886) 0:03:59.098 ***
2025-09-10 00:56:09.360095 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.360101 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.360107 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.360113 | orchestrator |
2025-09-10 00:56:09.360119 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-09-10 00:56:09.360125 | orchestrator | Wednesday 10 September 2025 00:48:44 +0000 (0:00:00.402) 0:03:59.501 ***
2025-09-10 00:56:09.360132 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.360138 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.360144 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.360154 | orchestrator |
2025-09-10 00:56:09.360163 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-09-10 00:56:09.360169 | orchestrator | Wednesday 10 September 2025 00:48:45 +0000 (0:00:00.865) 0:04:00.367 ***
2025-09-10 00:56:09.360175 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:56:09.360182 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:56:09.360188 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:56:09.360194 | orchestrator |
2025-09-10 00:56:09.360200 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-09-10 00:56:09.360206 | orchestrator | Wednesday 10 September 2025 00:48:46 +0000 (0:00:01.101) 0:04:01.468 ***
2025-09-10 00:56:09.360212 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:56:09.360218 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:56:09.360224 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:56:09.360230 | orchestrator |
2025-09-10 00:56:09.360236 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-09-10 00:56:09.360242 | orchestrator | Wednesday 10 September 2025 00:48:47 +0000 (0:00:00.807) 0:04:02.276 ***
2025-09-10 00:56:09.360248 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.360255 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.360261 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.360267 | orchestrator |
2025-09-10 00:56:09.360273 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-09-10 00:56:09.360279 | orchestrator | Wednesday 10 September 2025 00:48:47 +0000 (0:00:00.301) 0:04:02.577 ***
2025-09-10 00:56:09.360285 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:56:09.360291 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:56:09.360297 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:56:09.360303 | orchestrator |
2025-09-10 00:56:09.360309 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-09-10 00:56:09.360315 | orchestrator | Wednesday 10 September 2025 00:48:48 +0000 (0:00:00.565) 0:04:03.143 ***
2025-09-10 00:56:09.360322 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.360328 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.360334 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.360340 | orchestrator |
2025-09-10 00:56:09.360346 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-09-10 00:56:09.360355 | orchestrator | Wednesday 10 September 2025 00:48:48 +0000 (0:00:00.288) 0:04:03.431 ***
2025-09-10 00:56:09.360361 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.360368 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.360374 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.360379 | orchestrator |
2025-09-10 00:56:09.360411 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-09-10 00:56:09.360417 | orchestrator | Wednesday 10 September 2025 00:48:49 +0000 (0:00:00.284) 0:04:03.716 ***
2025-09-10 00:56:09.360423 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.360430 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.360436 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.360442 | orchestrator |
2025-09-10 00:56:09.360448 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-09-10 00:56:09.360453 | orchestrator | Wednesday 10 September 2025 00:48:49 +0000 (0:00:00.268) 0:04:03.984 ***
2025-09-10 00:56:09.360459 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.360464 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.360470 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.360475 | orchestrator |
2025-09-10 00:56:09.360480 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-09-10 00:56:09.360486 | orchestrator | Wednesday 10 September 2025 00:48:49 +0000 (0:00:00.472) 0:04:04.456 ***
2025-09-10 00:56:09.360491 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.360496 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.360502 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.360507 | orchestrator |
2025-09-10 00:56:09.360512 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-09-10 00:56:09.360522 | orchestrator | Wednesday 10 September 2025 00:48:50 +0000 (0:00:00.317) 0:04:04.774 ***
2025-09-10 00:56:09.360528 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:56:09.360533 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:56:09.360538 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:56:09.360544 | orchestrator |
2025-09-10 00:56:09.360549 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-09-10 00:56:09.360554 | orchestrator | Wednesday 10 September 2025 00:48:50 +0000 (0:00:00.304) 0:04:05.078 ***
2025-09-10 00:56:09.360560 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:56:09.360565 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:56:09.360570 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:56:09.360576 | orchestrator |
2025-09-10 00:56:09.360581 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-09-10 00:56:09.360586 | orchestrator | Wednesday 10 September 2025 00:48:50 +0000 (0:00:00.336) 0:04:05.415 ***
2025-09-10 00:56:09.360592 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:56:09.360597 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:56:09.360602 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:56:09.360607 | orchestrator |
2025-09-10 00:56:09.360613 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2025-09-10 00:56:09.360618 | orchestrator | Wednesday 10 September 2025 00:48:51 +0000 (0:00:00.803) 0:04:06.218 ***
2025-09-10 00:56:09.360623 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:56:09.360629 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:56:09.360634 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:56:09.360639 | orchestrator |
2025-09-10 00:56:09.360644 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2025-09-10 00:56:09.360650 | orchestrator | Wednesday 10 September 2025 00:48:51 +0000 (0:00:00.350) 0:04:06.569 ***
2025-09-10 00:56:09.360655 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-10 00:56:09.360660 | orchestrator |
2025-09-10 00:56:09.360666 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2025-09-10 00:56:09.360671 | orchestrator | Wednesday 10 September 2025 00:48:52 +0000 (0:00:00.575) 0:04:07.144 ***
2025-09-10 00:56:09.360676 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.360682 | orchestrator |
2025-09-10 00:56:09.360687 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2025-09-10 00:56:09.360696 | orchestrator | Wednesday 10 September 2025 00:48:52 +0000 (0:00:00.450) 0:04:07.595 ***
2025-09-10 00:56:09.360702 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-09-10 00:56:09.360707 | orchestrator |
2025-09-10 00:56:09.360713 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2025-09-10 00:56:09.360718 | orchestrator | Wednesday 10 September 2025 00:48:54 +0000 (0:00:01.106) 0:04:08.701 ***
2025-09-10 00:56:09.360723 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:56:09.360728 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:56:09.360734 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:56:09.360739 | orchestrator |
2025-09-10 00:56:09.360744 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2025-09-10 00:56:09.360750 | orchestrator | Wednesday 10 September 2025 00:48:54 +0000 (0:00:00.373) 0:04:09.075 ***
2025-09-10 00:56:09.360755 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:56:09.360760 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:56:09.360765 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:56:09.360771 | orchestrator |
2025-09-10 00:56:09.360776 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2025-09-10 00:56:09.360781 | orchestrator | Wednesday 10 September 2025 00:48:54 +0000 (0:00:00.394) 0:04:09.469 ***
2025-09-10 00:56:09.360786 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:56:09.360792 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:56:09.360797 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:56:09.360802 | orchestrator |
2025-09-10 00:56:09.360808 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2025-09-10 00:56:09.360817 | orchestrator | Wednesday 10 September 2025 00:48:56 +0000 (0:00:01.271) 0:04:10.741 ***
2025-09-10 00:56:09.360822 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:56:09.360827 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:56:09.360833 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:56:09.360838 | orchestrator |
2025-09-10 00:56:09.360843 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2025-09-10 00:56:09.360849 | orchestrator | Wednesday 10 September 2025 00:48:57 +0000 (0:00:01.087) 0:04:11.828 ***
2025-09-10 00:56:09.360854 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:56:09.360860 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:56:09.360865 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:56:09.360870 | orchestrator |
2025-09-10 00:56:09.360881 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2025-09-10 00:56:09.360887 | orchestrator | Wednesday 10 September 2025 00:48:57 +0000 (0:00:00.718) 0:04:12.547 ***
2025-09-10 00:56:09.360892 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:56:09.360897 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:56:09.360903 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:56:09.360908 | orchestrator |
2025-09-10 00:56:09.360913 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2025-09-10 00:56:09.360919 | orchestrator | Wednesday 10 September 2025 00:48:58 +0000 (0:00:00.818) 0:04:13.365 ***
2025-09-10 00:56:09.360924 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:56:09.360929 | orchestrator |
2025-09-10 00:56:09.360935 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2025-09-10 00:56:09.360940 | orchestrator | Wednesday 10 September 2025 00:48:59 +0000 (0:00:01.213) 0:04:14.579 ***
2025-09-10 00:56:09.360945 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:56:09.360951 | orchestrator |
2025-09-10 00:56:09.360956 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2025-09-10 00:56:09.360961 | orchestrator | Wednesday 10 September 2025 00:49:00 +0000 (0:00:00.752) 0:04:15.331 ***
2025-09-10 00:56:09.360966 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-10 00:56:09.360972 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-10 00:56:09.360977 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-10 00:56:09.360982 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-10 00:56:09.360988 | orchestrator | ok: [testbed-node-1] => (item=None)
2025-09-10 00:56:09.360993 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-10 00:56:09.360998 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-10 00:56:09.361004 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2025-09-10 00:56:09.361009 | orchestrator | ok: [testbed-node-2] => (item=None)
2025-09-10 00:56:09.361014 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2025-09-10 00:56:09.361020 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-10 00:56:09.361025 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2025-09-10 00:56:09.361031 | orchestrator |
2025-09-10 00:56:09.361036 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2025-09-10 00:56:09.361041 | orchestrator | Wednesday 10 September 2025 00:49:04 +0000 (0:00:03.574) 0:04:18.906 ***
2025-09-10 00:56:09.361047 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:56:09.361052 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:56:09.361057 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:56:09.361063 | orchestrator |
2025-09-10 00:56:09.361068 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2025-09-10 00:56:09.361073 | orchestrator | Wednesday 10 September 2025 00:49:05 +0000 (0:00:01.730) 0:04:20.636 ***
2025-09-10 00:56:09.361078 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:56:09.361084 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:56:09.361089 | orchestrator | ok: [testbed-node-2]
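Each task header in this excerpt carries a timing line from Ansible's `profile_tasks`-style callback, e.g. `(0:00:00.304) 0:03:57.159` — the parenthesised value is the previous task's duration and the second value the cumulative playbook runtime. A small parsing sketch (the helper names are my own, not part of this job):

```python
import re

# Matches the "(H:MM:SS.mmm) H:MM:SS.mmm" pair appended to each task header
# by the timing callback seen in this log.
TIMING = re.compile(
    r"\((?P<dur>\d+:\d{2}:\d{2}\.\d+)\)\s+(?P<total>\d+:\d{2}:\d{2}\.\d+)"
)

def to_seconds(hms: str) -> float:
    """Convert 'H:MM:SS.mmm' to seconds."""
    h, m, s = hms.split(":")
    return int(h) * 3600 + int(m) * 60 + float(s)

def parse_timing(line: str):
    """Return (previous_task_duration, cumulative_runtime) in seconds, or None."""
    m = TIMING.search(line)
    if not m:
        return None
    return to_seconds(m.group("dur")), to_seconds(m.group("total"))

line = "Wednesday 10 September 2025 00:48:42 +0000 (0:00:00.304) 0:03:57.159 ***"
print(parse_timing(line))
```

Feeding every task header through this gives a quick per-task duration profile of the run, e.g. for spotting the slow tasks such as the 21.9 s quorum wait below.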
2025-09-10 00:56:09.361098 | orchestrator |
2025-09-10 00:56:09.361104 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2025-09-10 00:56:09.361109 | orchestrator | Wednesday 10 September 2025 00:49:06 +0000 (0:00:00.373) 0:04:21.009 ***
2025-09-10 00:56:09.361114 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:56:09.361120 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:56:09.361125 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:56:09.361130 | orchestrator |
2025-09-10 00:56:09.361135 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2025-09-10 00:56:09.361141 | orchestrator | Wednesday 10 September 2025 00:49:06 +0000 (0:00:00.352) 0:04:21.362 ***
2025-09-10 00:56:09.361146 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:56:09.361151 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:56:09.361157 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:56:09.361162 | orchestrator |
2025-09-10 00:56:09.361170 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2025-09-10 00:56:09.361176 | orchestrator | Wednesday 10 September 2025 00:49:08 +0000 (0:00:01.872) 0:04:23.234 ***
2025-09-10 00:56:09.361181 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:56:09.361187 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:56:09.361192 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:56:09.361197 | orchestrator |
2025-09-10 00:56:09.361203 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2025-09-10 00:56:09.361208 | orchestrator | Wednesday 10 September 2025 00:49:10 +0000 (0:00:01.813) 0:04:25.048 ***
2025-09-10 00:56:09.361213 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.361219 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.361224 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.361229 | orchestrator |
2025-09-10 00:56:09.361234 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2025-09-10 00:56:09.361240 | orchestrator | Wednesday 10 September 2025 00:49:10 +0000 (0:00:00.368) 0:04:25.417 ***
2025-09-10 00:56:09.361245 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-10 00:56:09.361250 | orchestrator |
2025-09-10 00:56:09.361256 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2025-09-10 00:56:09.361261 | orchestrator | Wednesday 10 September 2025 00:49:11 +0000 (0:00:00.570) 0:04:25.987 ***
2025-09-10 00:56:09.361266 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.361272 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.361277 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.361282 | orchestrator |
2025-09-10 00:56:09.361288 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2025-09-10 00:56:09.361293 | orchestrator | Wednesday 10 September 2025 00:49:11 +0000 (0:00:00.585) 0:04:26.572 ***
2025-09-10 00:56:09.361298 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.361304 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.361309 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.361314 | orchestrator |
2025-09-10 00:56:09.361319 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2025-09-10 00:56:09.361328 | orchestrator | Wednesday 10 September 2025 00:49:12 +0000 (0:00:00.359) 0:04:26.931 ***
2025-09-10 00:56:09.361333 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-10 00:56:09.361339 | orchestrator |
2025-09-10 00:56:09.361344 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2025-09-10 00:56:09.361349 | orchestrator | Wednesday 10 September 2025 00:49:13 +0000 (0:00:00.904) 0:04:27.835 ***
2025-09-10 00:56:09.361355 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:56:09.361360 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:56:09.361365 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:56:09.361371 | orchestrator |
2025-09-10 00:56:09.361376 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2025-09-10 00:56:09.361396 | orchestrator | Wednesday 10 September 2025 00:49:15 +0000 (0:00:02.155) 0:04:29.991 ***
2025-09-10 00:56:09.361402 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:56:09.361408 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:56:09.361413 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:56:09.361419 | orchestrator |
2025-09-10 00:56:09.361424 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2025-09-10 00:56:09.361429 | orchestrator | Wednesday 10 September 2025 00:49:16 +0000 (0:00:01.643) 0:04:31.635 ***
2025-09-10 00:56:09.361435 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:56:09.361440 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:56:09.361445 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:56:09.361450 | orchestrator |
2025-09-10 00:56:09.361456 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2025-09-10 00:56:09.361461 | orchestrator | Wednesday 10 September 2025 00:49:18 +0000 (0:00:01.728) 0:04:33.363 ***
2025-09-10 00:56:09.361467 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:56:09.361472 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:56:09.361477 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:56:09.361483 | orchestrator |
2025-09-10 00:56:09.361488 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2025-09-10 00:56:09.361493 | orchestrator | Wednesday 10 September 2025 00:49:21 +0000 (0:00:02.784) 0:04:36.148 ***
2025-09-10 00:56:09.361499 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-10 00:56:09.361504 | orchestrator |
2025-09-10 00:56:09.361509 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2025-09-10 00:56:09.361515 | orchestrator | Wednesday 10 September 2025 00:49:22 +0000 (0:00:00.787) 0:04:36.935 ***
2025-09-10 00:56:09.361520 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2025-09-10 00:56:09.361525 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:56:09.361531 | orchestrator |
2025-09-10 00:56:09.361536 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2025-09-10 00:56:09.361541 | orchestrator | Wednesday 10 September 2025 00:49:44 +0000 (0:00:21.886) 0:04:58.821 ***
2025-09-10 00:56:09.361547 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:56:09.361552 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:56:09.361557 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:56:09.361563 | orchestrator |
2025-09-10 00:56:09.361568 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2025-09-10 00:56:09.361573 | orchestrator | Wednesday 10 September 2025 00:49:52 +0000 (0:00:08.283) 0:05:07.105 ***
2025-09-10 00:56:09.361579 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.361584 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.361589 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.361594 | orchestrator |
2025-09-10 00:56:09.361600 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2025-09-10 00:56:09.361605 | orchestrator | Wednesday 10 September 2025 00:49:52 +0000 (0:00:00.308) 0:05:07.413 ***
2025-09-10 00:56:09.361614 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__149b8434cca55e77a0165b1b64118cbc0ac740f9'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2025-09-10 00:56:09.361621 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__149b8434cca55e77a0165b1b64118cbc0ac740f9'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2025-09-10 00:56:09.361627 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__149b8434cca55e77a0165b1b64118cbc0ac740f9'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2025-09-10 00:56:09.361637 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__149b8434cca55e77a0165b1b64118cbc0ac740f9'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2025-09-10 00:56:09.361646 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__149b8434cca55e77a0165b1b64118cbc0ac740f9'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2025-09-10 00:56:09.361652 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__149b8434cca55e77a0165b1b64118cbc0ac740f9'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__149b8434cca55e77a0165b1b64118cbc0ac740f9'}])
2025-09-10 00:56:09.361657 | orchestrator |
2025-09-10 00:56:09.361663 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-09-10 00:56:09.361668 | orchestrator | Wednesday 10 September 2025 00:50:06 +0000 (0:00:13.952) 0:05:21.366 ***
2025-09-10 00:56:09.361673 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.361679 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.361684 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.361689 | orchestrator |
2025-09-10 00:56:09.361694 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2025-09-10 00:56:09.361700 | orchestrator | Wednesday 10 September 2025 00:50:07 +0000 (0:00:00.337) 0:05:21.704 ***
2025-09-10 00:56:09.361705 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-10 00:56:09.361710 | orchestrator |
2025-09-10 00:56:09.361715 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2025-09-10 00:56:09.361721 | orchestrator | Wednesday 10 September 2025 00:50:07 +0000 (0:00:00.525) 0:05:22.229 ***
2025-09-10 00:56:09.361726 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:56:09.361731 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:56:09.361736 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:56:09.361742 | orchestrator |
2025-09-10 00:56:09.361747 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2025-09-10 00:56:09.361752 | orchestrator | Wednesday 10 September 2025 00:50:08 +0000 (0:00:00.627) 0:05:22.856 ***
2025-09-10 00:56:09.361758 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.361763 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.361768 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.361773 | orchestrator |
2025-09-10 00:56:09.361779 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2025-09-10 00:56:09.361784 | orchestrator | Wednesday 10 September 2025 00:50:08 +0000 (0:00:00.367) 0:05:23.224 ***
2025-09-10 00:56:09.361789 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-10 00:56:09.361795 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-10 00:56:09.361800 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-10 00:56:09.361805 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.361810 | orchestrator |
2025-09-10 00:56:09.361816 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2025-09-10 00:56:09.361824 | orchestrator | Wednesday 10 September 2025 00:50:09 +0000 (0:00:00.619) 0:05:23.843 ***
2025-09-10 00:56:09.361830 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:56:09.361835 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:56:09.361840 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:56:09.361846 | orchestrator |
2025-09-10 00:56:09.361854 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2025-09-10 00:56:09.361859 | orchestrator |
2025-09-10 00:56:09.361865 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-09-10 00:56:09.361870 | orchestrator | Wednesday 10 September 2025 00:50:10 +0000 (0:00:00.867) 0:05:24.711 ***
2025-09-10 00:56:09.361875 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-10 00:56:09.361881 | orchestrator |
2025-09-10 00:56:09.361886 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-09-10 00:56:09.361891 | orchestrator | Wednesday 10 September 2025 00:50:10 +0000 (0:00:00.539) 0:05:25.250 ***
2025-09-10 00:56:09.361897 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-10 00:56:09.361902 | orchestrator |
2025-09-10 00:56:09.361907 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-09-10 00:56:09.361913 | orchestrator | Wednesday 10 September 2025 00:50:11 +0000 (0:00:00.496) 0:05:25.747 ***
2025-09-10 00:56:09.361918 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:56:09.361923 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:56:09.361928 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:56:09.361934 | orchestrator |
2025-09-10 00:56:09.361939 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-09-10 00:56:09.361944 | orchestrator | Wednesday 10 September 2025 00:50:12 +0000 (0:00:00.965) 0:05:26.713 ***
2025-09-10 00:56:09.361950 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.361955 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.361960 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.361966 | orchestrator |
2025-09-10 00:56:09.361971 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-09-10 00:56:09.361976 | orchestrator | Wednesday 10 September 2025 00:50:12 +0000 (0:00:00.327) 0:05:27.041 ***
2025-09-10 00:56:09.361982 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.361987 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.361992 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.361997 | orchestrator |
2025-09-10 00:56:09.362006 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-09-10 00:56:09.362011 | orchestrator | Wednesday 10 September 2025 00:50:12 +0000 (0:00:00.290) 0:05:27.332 ***
2025-09-10 00:56:09.362070 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.362076 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.362082 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.362087 | orchestrator |
2025-09-10 00:56:09.362092 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-09-10 00:56:09.362098 | orchestrator | Wednesday 10 September 2025 00:50:12 +0000 (0:00:00.303) 0:05:27.635 ***
2025-09-10 00:56:09.362103 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:56:09.362109 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:56:09.362114 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:56:09.362119 | orchestrator |
2025-09-10 00:56:09.362125 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-09-10 00:56:09.362130 | orchestrator | Wednesday 10 September 2025 00:50:13 +0000 (0:00:01.008) 0:05:28.644 ***
2025-09-10 00:56:09.362136 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.362141 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.362146 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.362151 | orchestrator |
2025-09-10 00:56:09.362157 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-09-10 00:56:09.362162 | orchestrator | Wednesday 10 September 2025 00:50:14 +0000 (0:00:00.340) 0:05:28.985 ***
2025-09-10 00:56:09.362171 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.362177 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.362182 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.362187 | orchestrator |
2025-09-10 00:56:09.362192 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-09-10 00:56:09.362198 | orchestrator | Wednesday 10 September 2025 00:50:14 +0000 (0:00:00.318) 0:05:29.304 ***
2025-09-10 00:56:09.362203 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:56:09.362208 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:56:09.362214 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:56:09.362219 | orchestrator |
2025-09-10 00:56:09.362224 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-09-10 00:56:09.362230 | orchestrator | Wednesday 10 September 2025 00:50:15 +0000 (0:00:00.741) 0:05:30.045 ***
2025-09-10 00:56:09.362235 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:56:09.362240 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:56:09.362246 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:56:09.362251 | orchestrator |
2025-09-10 00:56:09.362256 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-09-10 00:56:09.362261 | orchestrator | Wednesday 10 September 2025 00:50:16 +0000 (0:00:01.044) 0:05:31.090 ***
2025-09-10 00:56:09.362267 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.362272 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.362277 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.362283 | orchestrator |
2025-09-10 00:56:09.362288 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-09-10 00:56:09.362293 | orchestrator | Wednesday 10 September 2025 00:50:16 +0000 (0:00:00.314) 0:05:31.405 ***
2025-09-10 00:56:09.362299 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:56:09.362304 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:56:09.362309 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:56:09.362315 | orchestrator |
2025-09-10 00:56:09.362320 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-09-10 00:56:09.362325 | orchestrator | Wednesday 10 September 2025 00:50:17 +0000 (0:00:00.325) 0:05:31.730 ***
2025-09-10 00:56:09.362331 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.362336 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.362341 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.362346 | orchestrator |
2025-09-10 00:56:09.362352 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-09-10 00:56:09.362357 | orchestrator | Wednesday 10 September 2025 00:50:17 +0000 (0:00:00.281) 0:05:32.012 ***
2025-09-10 00:56:09.362363 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.362368 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.362404 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.362411 | orchestrator |
2025-09-10 00:56:09.362417 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-09-10 00:56:09.362422 | orchestrator | Wednesday 10 September 2025 00:50:17 +0000 (0:00:00.557) 0:05:32.570 ***
2025-09-10 00:56:09.362427 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.362433 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.362438 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.362443 | orchestrator |
2025-09-10 00:56:09.362449 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-09-10 00:56:09.362454 | orchestrator | Wednesday 10 September 2025 00:50:18 +0000 (0:00:00.322) 0:05:32.892 ***
2025-09-10 00:56:09.362460 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.362465 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.362470 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.362475 | orchestrator |
2025-09-10 00:56:09.362481 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-09-10 00:56:09.362486 | orchestrator | Wednesday 10 September 2025 00:50:18 +0000 (0:00:00.338) 0:05:33.230 ***
2025-09-10 00:56:09.362496 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.362501 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.362507 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.362512 | orchestrator |
2025-09-10 00:56:09.362517 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-09-10 00:56:09.362522 | orchestrator | Wednesday 10 September 2025 00:50:18 +0000 (0:00:00.306) 0:05:33.537 ***
2025-09-10 00:56:09.362528 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:56:09.362533 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:56:09.362538 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:56:09.362544 | orchestrator |
2025-09-10 00:56:09.362549 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-09-10 00:56:09.362555 | orchestrator | Wednesday 10 September 2025 00:50:19 +0000 (0:00:00.446) 0:05:33.983 ***
2025-09-10 00:56:09.362560 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:56:09.362565 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:56:09.362571 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:56:09.362576 | orchestrator |
2025-09-10 00:56:09.362581 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-09-10 00:56:09.362590 | orchestrator | Wednesday 10 September 2025 00:50:20 +0000 (0:00:00.673) 0:05:34.656 ***
2025-09-10 00:56:09.362596 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:56:09.362601 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:56:09.362606 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:56:09.362611 | orchestrator | 2025-09-10 00:56:09.362617 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2025-09-10 00:56:09.362622 | orchestrator | Wednesday 10 September 2025 00:50:20 +0000 (0:00:00.567) 0:05:35.224 *** 2025-09-10 00:56:09.362628 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-10 00:56:09.362633 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-10 00:56:09.362638 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-10 00:56:09.362644 | orchestrator | 2025-09-10 00:56:09.362649 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2025-09-10 00:56:09.362655 | orchestrator | Wednesday 10 September 2025 00:50:21 +0000 (0:00:00.853) 0:05:36.077 *** 2025-09-10 00:56:09.362660 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 00:56:09.362665 | orchestrator | 2025-09-10 00:56:09.362671 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2025-09-10 00:56:09.362676 | orchestrator | Wednesday 10 September 2025 00:50:22 +0000 (0:00:00.702) 0:05:36.780 *** 2025-09-10 00:56:09.362681 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:56:09.362687 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:56:09.362692 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:56:09.362697 | orchestrator | 2025-09-10 00:56:09.362703 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2025-09-10 00:56:09.362708 | orchestrator | Wednesday 10 September 2025 00:50:22 +0000 (0:00:00.629) 0:05:37.409 *** 2025-09-10 00:56:09.362713 | 
orchestrator | skipping: [testbed-node-0] 2025-09-10 00:56:09.362719 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:56:09.362724 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:56:09.362729 | orchestrator | 2025-09-10 00:56:09.362735 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2025-09-10 00:56:09.362740 | orchestrator | Wednesday 10 September 2025 00:50:23 +0000 (0:00:00.271) 0:05:37.681 *** 2025-09-10 00:56:09.362745 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-10 00:56:09.362751 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-10 00:56:09.362756 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-10 00:56:09.362762 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-09-10 00:56:09.362767 | orchestrator | 2025-09-10 00:56:09.362772 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2025-09-10 00:56:09.362782 | orchestrator | Wednesday 10 September 2025 00:50:33 +0000 (0:00:10.830) 0:05:48.511 *** 2025-09-10 00:56:09.362788 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:56:09.362793 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:56:09.362798 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:56:09.362804 | orchestrator | 2025-09-10 00:56:09.362809 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2025-09-10 00:56:09.362814 | orchestrator | Wednesday 10 September 2025 00:50:34 +0000 (0:00:00.598) 0:05:49.110 *** 2025-09-10 00:56:09.362820 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-09-10 00:56:09.362825 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-10 00:56:09.362830 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-10 00:56:09.362836 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-09-10 00:56:09.362841 | orchestrator | ok: 
[testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-10 00:56:09.362846 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-10 00:56:09.362852 | orchestrator | 2025-09-10 00:56:09.362873 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2025-09-10 00:56:09.362880 | orchestrator | Wednesday 10 September 2025 00:50:36 +0000 (0:00:02.147) 0:05:51.257 *** 2025-09-10 00:56:09.362885 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-09-10 00:56:09.362891 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-10 00:56:09.362896 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-10 00:56:09.362901 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-10 00:56:09.362907 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-09-10 00:56:09.362912 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-09-10 00:56:09.362917 | orchestrator | 2025-09-10 00:56:09.362923 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2025-09-10 00:56:09.362928 | orchestrator | Wednesday 10 September 2025 00:50:37 +0000 (0:00:01.252) 0:05:52.510 *** 2025-09-10 00:56:09.362933 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:56:09.362939 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:56:09.362944 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:56:09.362950 | orchestrator | 2025-09-10 00:56:09.362955 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2025-09-10 00:56:09.362960 | orchestrator | Wednesday 10 September 2025 00:50:38 +0000 (0:00:00.713) 0:05:53.223 *** 2025-09-10 00:56:09.362966 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:56:09.362971 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:56:09.362977 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:56:09.362982 | 
orchestrator | 2025-09-10 00:56:09.362987 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2025-09-10 00:56:09.362993 | orchestrator | Wednesday 10 September 2025 00:50:39 +0000 (0:00:00.564) 0:05:53.788 *** 2025-09-10 00:56:09.362998 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:56:09.363003 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:56:09.363009 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:56:09.363014 | orchestrator | 2025-09-10 00:56:09.363019 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2025-09-10 00:56:09.363025 | orchestrator | Wednesday 10 September 2025 00:50:39 +0000 (0:00:00.290) 0:05:54.078 *** 2025-09-10 00:56:09.363034 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 00:56:09.363039 | orchestrator | 2025-09-10 00:56:09.363044 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2025-09-10 00:56:09.363050 | orchestrator | Wednesday 10 September 2025 00:50:39 +0000 (0:00:00.532) 0:05:54.611 *** 2025-09-10 00:56:09.363055 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:56:09.363061 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:56:09.363066 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:56:09.363071 | orchestrator | 2025-09-10 00:56:09.363077 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2025-09-10 00:56:09.363087 | orchestrator | Wednesday 10 September 2025 00:50:40 +0000 (0:00:00.317) 0:05:54.928 *** 2025-09-10 00:56:09.363092 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:56:09.363097 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:56:09.363103 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:56:09.363108 | orchestrator | 2025-09-10 00:56:09.363113 | orchestrator | TASK 
[ceph-mgr : Include_tasks systemd.yml] ************************************ 2025-09-10 00:56:09.363118 | orchestrator | Wednesday 10 September 2025 00:50:40 +0000 (0:00:00.680) 0:05:55.608 *** 2025-09-10 00:56:09.363124 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-1, testbed-node-0, testbed-node-2 2025-09-10 00:56:09.363129 | orchestrator | 2025-09-10 00:56:09.363135 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2025-09-10 00:56:09.363140 | orchestrator | Wednesday 10 September 2025 00:50:41 +0000 (0:00:00.544) 0:05:56.153 *** 2025-09-10 00:56:09.363145 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:56:09.363151 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:56:09.363156 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:56:09.363161 | orchestrator | 2025-09-10 00:56:09.363167 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2025-09-10 00:56:09.363172 | orchestrator | Wednesday 10 September 2025 00:50:42 +0000 (0:00:01.359) 0:05:57.512 *** 2025-09-10 00:56:09.363177 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:56:09.363183 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:56:09.363188 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:56:09.363193 | orchestrator | 2025-09-10 00:56:09.363199 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2025-09-10 00:56:09.363204 | orchestrator | Wednesday 10 September 2025 00:50:44 +0000 (0:00:01.639) 0:05:59.152 *** 2025-09-10 00:56:09.363210 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:56:09.363215 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:56:09.363220 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:56:09.363226 | orchestrator | 2025-09-10 00:56:09.363231 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 
2025-09-10 00:56:09.363236 | orchestrator | Wednesday 10 September 2025 00:50:46 +0000 (0:00:01.819) 0:06:00.971 *** 2025-09-10 00:56:09.363242 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:56:09.363247 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:56:09.363253 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:56:09.363258 | orchestrator | 2025-09-10 00:56:09.363263 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2025-09-10 00:56:09.363269 | orchestrator | Wednesday 10 September 2025 00:50:48 +0000 (0:00:02.104) 0:06:03.075 *** 2025-09-10 00:56:09.363274 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:56:09.363279 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:56:09.363285 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-09-10 00:56:09.363290 | orchestrator | 2025-09-10 00:56:09.363295 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2025-09-10 00:56:09.363301 | orchestrator | Wednesday 10 September 2025 00:50:48 +0000 (0:00:00.392) 0:06:03.468 *** 2025-09-10 00:56:09.363306 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2025-09-10 00:56:09.363326 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2025-09-10 00:56:09.363333 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2025-09-10 00:56:09.363338 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2025-09-10 00:56:09.363344 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 
2025-09-10 00:56:09.363349 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-09-10 00:56:09.363358 | orchestrator | 2025-09-10 00:56:09.363363 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2025-09-10 00:56:09.363369 | orchestrator | Wednesday 10 September 2025 00:51:19 +0000 (0:00:30.721) 0:06:34.189 *** 2025-09-10 00:56:09.363374 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-09-10 00:56:09.363379 | orchestrator | 2025-09-10 00:56:09.363422 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-09-10 00:56:09.363427 | orchestrator | Wednesday 10 September 2025 00:51:20 +0000 (0:00:01.310) 0:06:35.500 *** 2025-09-10 00:56:09.363433 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:56:09.363438 | orchestrator | 2025-09-10 00:56:09.363443 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2025-09-10 00:56:09.363449 | orchestrator | Wednesday 10 September 2025 00:51:21 +0000 (0:00:00.321) 0:06:35.821 *** 2025-09-10 00:56:09.363454 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:56:09.363459 | orchestrator | 2025-09-10 00:56:09.363465 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2025-09-10 00:56:09.363470 | orchestrator | Wednesday 10 September 2025 00:51:21 +0000 (0:00:00.145) 0:06:35.967 *** 2025-09-10 00:56:09.363476 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-09-10 00:56:09.363481 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-09-10 00:56:09.363491 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-09-10 00:56:09.363497 | orchestrator | 2025-09-10 00:56:09.363502 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2025-09-10 00:56:09.363508 | orchestrator | Wednesday 10 September 2025 00:51:27 +0000 (0:00:06.358) 0:06:42.326 *** 2025-09-10 00:56:09.363513 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-09-10 00:56:09.363518 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-09-10 00:56:09.363524 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-09-10 00:56:09.363529 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-09-10 00:56:09.363534 | orchestrator | 2025-09-10 00:56:09.363540 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-10 00:56:09.363545 | orchestrator | Wednesday 10 September 2025 00:51:32 +0000 (0:00:04.746) 0:06:47.073 *** 2025-09-10 00:56:09.363551 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:56:09.363556 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:56:09.363561 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:56:09.363567 | orchestrator | 2025-09-10 00:56:09.363572 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-09-10 00:56:09.363577 | orchestrator | Wednesday 10 September 2025 00:51:33 +0000 (0:00:00.932) 0:06:48.005 *** 2025-09-10 00:56:09.363583 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-2, testbed-node-1 2025-09-10 00:56:09.363588 | orchestrator | 2025-09-10 00:56:09.363594 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-09-10 00:56:09.363599 | orchestrator | Wednesday 10 September 2025 00:51:34 +0000 (0:00:00.680) 0:06:48.686 *** 2025-09-10 00:56:09.363604 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:56:09.363610 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:56:09.363615 | orchestrator | ok: 
[testbed-node-2] 2025-09-10 00:56:09.363620 | orchestrator | 2025-09-10 00:56:09.363626 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-09-10 00:56:09.363631 | orchestrator | Wednesday 10 September 2025 00:51:34 +0000 (0:00:00.336) 0:06:49.023 *** 2025-09-10 00:56:09.363636 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:56:09.363642 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:56:09.363647 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:56:09.363652 | orchestrator | 2025-09-10 00:56:09.363657 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-09-10 00:56:09.363669 | orchestrator | Wednesday 10 September 2025 00:51:35 +0000 (0:00:01.618) 0:06:50.641 *** 2025-09-10 00:56:09.363674 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-10 00:56:09.363680 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-10 00:56:09.363685 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-10 00:56:09.363690 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:56:09.363696 | orchestrator | 2025-09-10 00:56:09.363701 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-09-10 00:56:09.363706 | orchestrator | Wednesday 10 September 2025 00:51:36 +0000 (0:00:00.636) 0:06:51.278 *** 2025-09-10 00:56:09.363712 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:56:09.363717 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:56:09.363722 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:56:09.363728 | orchestrator | 2025-09-10 00:56:09.363733 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-09-10 00:56:09.363738 | orchestrator | 2025-09-10 00:56:09.363744 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-10 
00:56:09.363749 | orchestrator | Wednesday 10 September 2025 00:51:37 +0000 (0:00:00.552) 0:06:51.831 *** 2025-09-10 00:56:09.363755 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-10 00:56:09.363760 | orchestrator | 2025-09-10 00:56:09.363784 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-10 00:56:09.363791 | orchestrator | Wednesday 10 September 2025 00:51:37 +0000 (0:00:00.694) 0:06:52.525 *** 2025-09-10 00:56:09.363796 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-10 00:56:09.363802 | orchestrator | 2025-09-10 00:56:09.363807 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-10 00:56:09.363812 | orchestrator | Wednesday 10 September 2025 00:51:38 +0000 (0:00:00.528) 0:06:53.054 *** 2025-09-10 00:56:09.363817 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.363823 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:56:09.363828 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:56:09.363833 | orchestrator | 2025-09-10 00:56:09.363839 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-10 00:56:09.363844 | orchestrator | Wednesday 10 September 2025 00:51:38 +0000 (0:00:00.325) 0:06:53.379 *** 2025-09-10 00:56:09.363849 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:56:09.363854 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:56:09.363860 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:56:09.363865 | orchestrator | 2025-09-10 00:56:09.363870 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-10 00:56:09.363875 | orchestrator | Wednesday 10 September 2025 00:51:39 +0000 (0:00:00.974) 0:06:54.354 *** 
2025-09-10 00:56:09.363881 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:56:09.363886 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:56:09.363891 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:56:09.363896 | orchestrator | 2025-09-10 00:56:09.363901 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-10 00:56:09.363907 | orchestrator | Wednesday 10 September 2025 00:51:40 +0000 (0:00:00.749) 0:06:55.104 *** 2025-09-10 00:56:09.363912 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:56:09.363917 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:56:09.363922 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:56:09.363928 | orchestrator | 2025-09-10 00:56:09.363933 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-10 00:56:09.363941 | orchestrator | Wednesday 10 September 2025 00:51:41 +0000 (0:00:00.707) 0:06:55.812 *** 2025-09-10 00:56:09.363947 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.363952 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:56:09.363958 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:56:09.363963 | orchestrator | 2025-09-10 00:56:09.363973 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-10 00:56:09.363978 | orchestrator | Wednesday 10 September 2025 00:51:41 +0000 (0:00:00.282) 0:06:56.094 *** 2025-09-10 00:56:09.363983 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.363989 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:56:09.363994 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:56:09.363999 | orchestrator | 2025-09-10 00:56:09.364005 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-10 00:56:09.364010 | orchestrator | Wednesday 10 September 2025 00:51:42 +0000 (0:00:00.594) 0:06:56.689 *** 2025-09-10 00:56:09.364015 | 
orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.364021 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:56:09.364026 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:56:09.364031 | orchestrator | 2025-09-10 00:56:09.364036 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-10 00:56:09.364042 | orchestrator | Wednesday 10 September 2025 00:51:42 +0000 (0:00:00.292) 0:06:56.981 *** 2025-09-10 00:56:09.364047 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:56:09.364052 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:56:09.364057 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:56:09.364063 | orchestrator | 2025-09-10 00:56:09.364068 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-10 00:56:09.364073 | orchestrator | Wednesday 10 September 2025 00:51:43 +0000 (0:00:00.737) 0:06:57.719 *** 2025-09-10 00:56:09.364079 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:56:09.364084 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:56:09.364089 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:56:09.364094 | orchestrator | 2025-09-10 00:56:09.364100 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-10 00:56:09.364105 | orchestrator | Wednesday 10 September 2025 00:51:43 +0000 (0:00:00.727) 0:06:58.446 *** 2025-09-10 00:56:09.364110 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.364115 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:56:09.364121 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:56:09.364126 | orchestrator | 2025-09-10 00:56:09.364131 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-10 00:56:09.364136 | orchestrator | Wednesday 10 September 2025 00:51:44 +0000 (0:00:00.618) 0:06:59.065 *** 2025-09-10 00:56:09.364142 | orchestrator | skipping: 
[testbed-node-3] 2025-09-10 00:56:09.364147 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:56:09.364152 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:56:09.364158 | orchestrator | 2025-09-10 00:56:09.364163 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-10 00:56:09.364168 | orchestrator | Wednesday 10 September 2025 00:51:44 +0000 (0:00:00.334) 0:06:59.399 *** 2025-09-10 00:56:09.364173 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:56:09.364179 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:56:09.364184 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:56:09.364189 | orchestrator | 2025-09-10 00:56:09.364195 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-10 00:56:09.364200 | orchestrator | Wednesday 10 September 2025 00:51:45 +0000 (0:00:00.331) 0:06:59.731 *** 2025-09-10 00:56:09.364205 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:56:09.364210 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:56:09.364216 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:56:09.364221 | orchestrator | 2025-09-10 00:56:09.364226 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-10 00:56:09.364232 | orchestrator | Wednesday 10 September 2025 00:51:45 +0000 (0:00:00.327) 0:07:00.058 *** 2025-09-10 00:56:09.364237 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:56:09.364242 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:56:09.364248 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:56:09.364253 | orchestrator | 2025-09-10 00:56:09.364261 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-10 00:56:09.364266 | orchestrator | Wednesday 10 September 2025 00:51:46 +0000 (0:00:00.613) 0:07:00.671 *** 2025-09-10 00:56:09.364275 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.364281 | 
orchestrator | skipping: [testbed-node-4] 2025-09-10 00:56:09.364286 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:56:09.364291 | orchestrator | 2025-09-10 00:56:09.364297 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-10 00:56:09.364302 | orchestrator | Wednesday 10 September 2025 00:51:46 +0000 (0:00:00.304) 0:07:00.976 *** 2025-09-10 00:56:09.364307 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.364313 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:56:09.364318 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:56:09.364323 | orchestrator | 2025-09-10 00:56:09.364328 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-10 00:56:09.364334 | orchestrator | Wednesday 10 September 2025 00:51:46 +0000 (0:00:00.292) 0:07:01.268 *** 2025-09-10 00:56:09.364339 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.364344 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:56:09.364349 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:56:09.364354 | orchestrator | 2025-09-10 00:56:09.364360 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-10 00:56:09.364365 | orchestrator | Wednesday 10 September 2025 00:51:46 +0000 (0:00:00.293) 0:07:01.562 *** 2025-09-10 00:56:09.364370 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:56:09.364376 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:56:09.364412 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:56:09.364419 | orchestrator | 2025-09-10 00:56:09.364424 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-10 00:56:09.364429 | orchestrator | Wednesday 10 September 2025 00:51:47 +0000 (0:00:00.626) 0:07:02.189 *** 2025-09-10 00:56:09.364435 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:56:09.364440 | orchestrator | ok: 
[testbed-node-4]
2025-09-10 00:56:09.364445 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.364451 | orchestrator |
2025-09-10 00:56:09.364456 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2025-09-10 00:56:09.364461 | orchestrator | Wednesday 10 September 2025 00:51:48 +0000 (0:00:00.528) 0:07:02.718 ***
2025-09-10 00:56:09.364470 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.364475 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.364481 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.364486 | orchestrator |
2025-09-10 00:56:09.364491 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2025-09-10 00:56:09.364496 | orchestrator | Wednesday 10 September 2025 00:51:48 +0000 (0:00:00.342) 0:07:03.061 ***
2025-09-10 00:56:09.364502 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-10 00:56:09.364507 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-10 00:56:09.364512 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-10 00:56:09.364517 | orchestrator |
2025-09-10 00:56:09.364523 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2025-09-10 00:56:09.364528 | orchestrator | Wednesday 10 September 2025 00:51:49 +0000 (0:00:00.900) 0:07:03.962 ***
2025-09-10 00:56:09.364533 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-10 00:56:09.364539 | orchestrator |
2025-09-10 00:56:09.364544 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2025-09-10 00:56:09.364549 | orchestrator | Wednesday 10 September 2025 00:51:50 +0000 (0:00:00.836) 0:07:04.798 ***
2025-09-10 00:56:09.364555 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.364560 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.364565 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.364570 | orchestrator |
2025-09-10 00:56:09.364576 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2025-09-10 00:56:09.364581 | orchestrator | Wednesday 10 September 2025 00:51:50 +0000 (0:00:00.313) 0:07:05.112 ***
2025-09-10 00:56:09.364590 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.364596 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.364601 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.364606 | orchestrator |
2025-09-10 00:56:09.364611 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2025-09-10 00:56:09.364617 | orchestrator | Wednesday 10 September 2025 00:51:50 +0000 (0:00:00.342) 0:07:05.454 ***
2025-09-10 00:56:09.364622 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.364627 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.364633 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.364638 | orchestrator |
2025-09-10 00:56:09.364643 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2025-09-10 00:56:09.364648 | orchestrator | Wednesday 10 September 2025 00:51:51 +0000 (0:00:00.874) 0:07:06.329 ***
2025-09-10 00:56:09.364654 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.364659 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.364664 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.364669 | orchestrator |
2025-09-10 00:56:09.364675 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2025-09-10 00:56:09.364680 | orchestrator | Wednesday 10 September 2025 00:51:52 +0000 (0:00:00.342) 0:07:06.671 ***
2025-09-10 00:56:09.364685 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-09-10 00:56:09.364691 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-09-10 00:56:09.364696 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-09-10 00:56:09.364701 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-09-10 00:56:09.364706 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-09-10 00:56:09.364712 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-09-10 00:56:09.364721 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-09-10 00:56:09.364726 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-09-10 00:56:09.364732 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-09-10 00:56:09.364737 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2025-09-10 00:56:09.364742 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2025-09-10 00:56:09.364747 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2025-09-10 00:56:09.364753 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-09-10 00:56:09.364758 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-09-10 00:56:09.364763 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-09-10 00:56:09.364768 | orchestrator |
2025-09-10 00:56:09.364774 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2025-09-10 00:56:09.364779 | orchestrator | Wednesday 10 September 2025 00:51:53 +0000 (0:00:01.932) 0:07:08.604 ***
2025-09-10 00:56:09.364784 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.364789 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.364795 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.364800 | orchestrator |
2025-09-10 00:56:09.364805 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2025-09-10 00:56:09.364811 | orchestrator | Wednesday 10 September 2025 00:51:54 +0000 (0:00:00.339) 0:07:08.943 ***
2025-09-10 00:56:09.364816 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-10 00:56:09.364825 | orchestrator |
2025-09-10 00:56:09.364830 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2025-09-10 00:56:09.364838 | orchestrator | Wednesday 10 September 2025 00:51:55 +0000 (0:00:00.792) 0:07:09.736 ***
2025-09-10 00:56:09.364844 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2025-09-10 00:56:09.364849 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2025-09-10 00:56:09.364858 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2025-09-10 00:56:09.364868 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2025-09-10 00:56:09.364878 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2025-09-10 00:56:09.364888 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2025-09-10 00:56:09.364897 | orchestrator |
2025-09-10 00:56:09.364906 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2025-09-10 00:56:09.364916 | orchestrator | Wednesday 10 September 2025 00:51:56 +0000 (0:00:01.020) 0:07:10.756 ***
2025-09-10 00:56:09.364922 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-10 00:56:09.364927 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-09-10 00:56:09.364932 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-09-10 00:56:09.364938 | orchestrator |
2025-09-10 00:56:09.364942 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2025-09-10 00:56:09.364947 | orchestrator | Wednesday 10 September 2025 00:51:58 +0000 (0:00:02.310) 0:07:13.067 ***
2025-09-10 00:56:09.364952 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-10 00:56:09.364957 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-09-10 00:56:09.364961 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:56:09.364966 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-10 00:56:09.364971 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-09-10 00:56:09.364976 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:56:09.364980 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-10 00:56:09.364985 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-09-10 00:56:09.364990 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:56:09.364994 | orchestrator |
2025-09-10 00:56:09.364999 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2025-09-10 00:56:09.365004 | orchestrator | Wednesday 10 September 2025 00:51:59 +0000 (0:00:01.403) 0:07:14.471 ***
2025-09-10 00:56:09.365008 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-10 00:56:09.365013 | orchestrator |
2025-09-10 00:56:09.365018 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2025-09-10 00:56:09.365023 | orchestrator | Wednesday 10 September 2025 00:52:02 +0000 (0:00:02.217) 0:07:16.688 ***
2025-09-10 00:56:09.365027 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-10 00:56:09.365032 | orchestrator |
2025-09-10 00:56:09.365037 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2025-09-10 00:56:09.365042 | orchestrator | Wednesday 10 September 2025 00:52:02 +0000 (0:00:00.517) 0:07:17.205 ***
2025-09-10 00:56:09.365046 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-36dac960-67a7-54a4-bbd2-b6f8976b18f7', 'data_vg': 'ceph-36dac960-67a7-54a4-bbd2-b6f8976b18f7'})
2025-09-10 00:56:09.365052 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-4b73e898-cb4c-523f-8aca-971ee560c7ea', 'data_vg': 'ceph-4b73e898-cb4c-523f-8aca-971ee560c7ea'})
2025-09-10 00:56:09.365057 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-20419d67-2a88-5ee6-832e-dd0a34a7687a', 'data_vg': 'ceph-20419d67-2a88-5ee6-832e-dd0a34a7687a'})
2025-09-10 00:56:09.365065 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f4115e81-926e-57fb-8145-65084efa4466', 'data_vg': 'ceph-f4115e81-926e-57fb-8145-65084efa4466'})
2025-09-10 00:56:09.365070 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-28e77ae9-929e-5c68-8a2a-91f3bea00aca', 'data_vg': 'ceph-28e77ae9-929e-5c68-8a2a-91f3bea00aca'})
2025-09-10 00:56:09.365078 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2bea83b6-6800-529c-bdd8-a613f3421a6f', 'data_vg': 'ceph-2bea83b6-6800-529c-bdd8-a613f3421a6f'})
2025-09-10 00:56:09.365083 | orchestrator |
2025-09-10 00:56:09.365088 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2025-09-10 00:56:09.365093 | orchestrator | Wednesday 10 September 2025 00:52:44 +0000 (0:00:41.506) 0:07:58.712 ***
2025-09-10 00:56:09.365097 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.365102 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.365107 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.365112 | orchestrator |
2025-09-10 00:56:09.365116 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2025-09-10 00:56:09.365121 | orchestrator | Wednesday 10 September 2025 00:52:44 +0000 (0:00:00.631) 0:07:59.343 ***
2025-09-10 00:56:09.365126 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-10 00:56:09.365131 | orchestrator |
2025-09-10 00:56:09.365135 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2025-09-10 00:56:09.365140 | orchestrator | Wednesday 10 September 2025 00:52:45 +0000 (0:00:00.562) 0:07:59.906 ***
2025-09-10 00:56:09.365145 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.365150 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.365154 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.365159 | orchestrator |
2025-09-10 00:56:09.365164 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2025-09-10 00:56:09.365169 | orchestrator | Wednesday 10 September 2025 00:52:45 +0000 (0:00:00.658) 0:08:00.565 ***
2025-09-10 00:56:09.365173 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.365181 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.365186 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.365190 | orchestrator |
2025-09-10 00:56:09.365195 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2025-09-10 00:56:09.365200 | orchestrator | Wednesday 10 September 2025 00:52:48 +0000 (0:00:02.848) 0:08:03.414 ***
2025-09-10 00:56:09.365205 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-10 00:56:09.365209 | orchestrator |
2025-09-10 00:56:09.365214 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2025-09-10 00:56:09.365219 | orchestrator | Wednesday 10 September 2025 00:52:49 +0000 (0:00:00.553) 0:08:03.968 ***
2025-09-10 00:56:09.365223 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:56:09.365228 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:56:09.365233 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:56:09.365238 | orchestrator |
2025-09-10 00:56:09.365242 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2025-09-10 00:56:09.365247 | orchestrator | Wednesday 10 September 2025 00:52:50 +0000 (0:00:01.198) 0:08:05.167 ***
2025-09-10 00:56:09.365252 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:56:09.365257 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:56:09.365261 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:56:09.365266 | orchestrator |
2025-09-10 00:56:09.365271 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2025-09-10 00:56:09.365275 | orchestrator | Wednesday 10 September 2025 00:52:52 +0000 (0:00:01.487) 0:08:06.654 ***
2025-09-10 00:56:09.365280 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:56:09.365285 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:56:09.365290 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:56:09.365294 | orchestrator |
2025-09-10 00:56:09.365299 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2025-09-10 00:56:09.365304 | orchestrator | Wednesday 10 September 2025 00:52:53 +0000 (0:00:01.887) 0:08:08.542 ***
2025-09-10 00:56:09.365309 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.365317 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.365322 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.365326 | orchestrator |
2025-09-10 00:56:09.365331 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2025-09-10 00:56:09.365336 | orchestrator | Wednesday 10 September 2025 00:52:54 +0000 (0:00:00.397) 0:08:08.939 ***
2025-09-10 00:56:09.365341 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.365345 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.365350 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.365355 | orchestrator |
2025-09-10 00:56:09.365360 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2025-09-10 00:56:09.365364 | orchestrator | Wednesday 10 September 2025 00:52:54 +0000 (0:00:00.424) 0:08:09.364 ***
2025-09-10 00:56:09.365369 | orchestrator | ok: [testbed-node-3] => (item=4)
2025-09-10 00:56:09.365374 | orchestrator | ok: [testbed-node-4] => (item=2)
2025-09-10 00:56:09.365379 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-09-10 00:56:09.365395 | orchestrator | ok: [testbed-node-4] => (item=5)
2025-09-10 00:56:09.365400 | orchestrator | ok: [testbed-node-5] => (item=3)
2025-09-10 00:56:09.365405 | orchestrator | ok: [testbed-node-5] => (item=1)
2025-09-10 00:56:09.365409 | orchestrator |
2025-09-10 00:56:09.365414 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2025-09-10 00:56:09.365419 | orchestrator | Wednesday 10 September 2025 00:52:56 +0000 (0:00:01.550) 0:08:10.914 ***
2025-09-10 00:56:09.365424 | orchestrator | changed: [testbed-node-3] => (item=4)
2025-09-10 00:56:09.365428 | orchestrator | changed: [testbed-node-4] => (item=2)
2025-09-10 00:56:09.365433 | orchestrator | changed: [testbed-node-5] => (item=3)
2025-09-10 00:56:09.365438 | orchestrator | changed: [testbed-node-4] => (item=5)
2025-09-10 00:56:09.365442 | orchestrator | changed: [testbed-node-3] => (item=0)
2025-09-10 00:56:09.365447 | orchestrator | changed: [testbed-node-5] => (item=1)
2025-09-10 00:56:09.365452 | orchestrator |
2025-09-10 00:56:09.365459 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2025-09-10 00:56:09.365464 | orchestrator | Wednesday 10 September 2025 00:52:58 +0000 (0:00:02.137) 0:08:13.052 ***
2025-09-10 00:56:09.365469 | orchestrator | changed: [testbed-node-3] => (item=4)
2025-09-10 00:56:09.365474 | orchestrator | changed: [testbed-node-4] => (item=2)
2025-09-10 00:56:09.365479 | orchestrator | changed: [testbed-node-5] => (item=3)
2025-09-10 00:56:09.365483 | orchestrator | changed: [testbed-node-4] => (item=5)
2025-09-10 00:56:09.365488 | orchestrator | changed: [testbed-node-3] => (item=0)
2025-09-10 00:56:09.365493 | orchestrator | changed: [testbed-node-5] => (item=1)
2025-09-10 00:56:09.365498 | orchestrator |
2025-09-10 00:56:09.365502 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2025-09-10 00:56:09.365507 | orchestrator | Wednesday 10 September 2025 00:53:01 +0000 (0:00:03.497) 0:08:16.549 ***
2025-09-10 00:56:09.365512 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.365517 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.365521 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-09-10 00:56:09.365526 | orchestrator |
2025-09-10 00:56:09.365531 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2025-09-10 00:56:09.365536 | orchestrator | Wednesday 10 September 2025 00:53:04 +0000 (0:00:02.617) 0:08:19.167 ***
2025-09-10 00:56:09.365540 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.365545 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.365550 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2025-09-10 00:56:09.365555 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-09-10 00:56:09.365559 | orchestrator |
2025-09-10 00:56:09.365564 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2025-09-10 00:56:09.365569 | orchestrator | Wednesday 10 September 2025 00:53:17 +0000 (0:00:12.973) 0:08:32.140 ***
2025-09-10 00:56:09.365577 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.365582 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.365586 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.365591 | orchestrator |
2025-09-10 00:56:09.365599 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-09-10 00:56:09.365603 | orchestrator | Wednesday 10 September 2025 00:53:18 +0000 (0:00:00.821) 0:08:32.962 ***
2025-09-10 00:56:09.365608 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.365613 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.365618 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.365623 | orchestrator |
2025-09-10 00:56:09.365627 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2025-09-10 00:56:09.365632 | orchestrator | Wednesday 10 September 2025 00:53:18 +0000 (0:00:00.611) 0:08:33.573 ***
2025-09-10 00:56:09.365637 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-10 00:56:09.365642 | orchestrator |
2025-09-10 00:56:09.365646 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2025-09-10 00:56:09.365651 | orchestrator | Wednesday 10 September 2025 00:53:19 +0000 (0:00:00.510) 0:08:34.083 ***
2025-09-10 00:56:09.365656 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-10 00:56:09.365661 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-10 00:56:09.365666 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-10 00:56:09.365670 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.365675 | orchestrator |
2025-09-10 00:56:09.365680 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2025-09-10 00:56:09.365684 | orchestrator | Wednesday 10 September 2025 00:53:19 +0000 (0:00:00.403) 0:08:34.487 ***
2025-09-10 00:56:09.365689 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.365694 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.365698 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.365703 | orchestrator |
2025-09-10 00:56:09.365708 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2025-09-10 00:56:09.365713 | orchestrator | Wednesday 10 September 2025 00:53:20 +0000 (0:00:00.350) 0:08:34.837 ***
2025-09-10 00:56:09.365717 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.365722 | orchestrator |
2025-09-10 00:56:09.365727 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2025-09-10 00:56:09.365732 | orchestrator | Wednesday 10 September 2025 00:53:20 +0000 (0:00:00.250) 0:08:35.088 ***
2025-09-10 00:56:09.365736 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.365741 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.365746 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.365751 | orchestrator |
2025-09-10 00:56:09.365755 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2025-09-10 00:56:09.365760 | orchestrator | Wednesday 10 September 2025 00:53:21 +0000 (0:00:00.729) 0:08:35.817 ***
2025-09-10 00:56:09.365765 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.365770 | orchestrator |
2025-09-10 00:56:09.365774 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2025-09-10 00:56:09.365779 | orchestrator | Wednesday 10 September 2025 00:53:21 +0000 (0:00:00.217) 0:08:36.034 ***
2025-09-10 00:56:09.365784 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.365789 | orchestrator |
2025-09-10 00:56:09.365794 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2025-09-10 00:56:09.365798 | orchestrator | Wednesday 10 September 2025 00:53:21 +0000 (0:00:00.212) 0:08:36.247 ***
2025-09-10 00:56:09.365803 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.365808 | orchestrator |
2025-09-10 00:56:09.365813 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2025-09-10 00:56:09.365817 | orchestrator | Wednesday 10 September 2025 00:53:21 +0000 (0:00:00.151) 0:08:36.398 ***
2025-09-10 00:56:09.365822 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.365830 | orchestrator |
2025-09-10 00:56:09.365835 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2025-09-10 00:56:09.365840 | orchestrator | Wednesday 10 September 2025 00:53:21 +0000 (0:00:00.206) 0:08:36.605 ***
2025-09-10 00:56:09.365847 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.365852 | orchestrator |
2025-09-10 00:56:09.365856 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2025-09-10 00:56:09.365861 | orchestrator | Wednesday 10 September 2025 00:53:22 +0000 (0:00:00.214) 0:08:36.819 ***
2025-09-10 00:56:09.365866 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-10 00:56:09.365871 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-10 00:56:09.365875 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-10 00:56:09.365880 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.365885 | orchestrator |
2025-09-10 00:56:09.365889 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2025-09-10 00:56:09.365894 | orchestrator | Wednesday 10 September 2025 00:53:22 +0000 (0:00:00.440) 0:08:37.259 ***
2025-09-10 00:56:09.365899 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.365904 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.365909 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.365913 | orchestrator |
2025-09-10 00:56:09.365918 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2025-09-10 00:56:09.365923 | orchestrator | Wednesday 10 September 2025 00:53:22 +0000 (0:00:00.339) 0:08:37.599 ***
2025-09-10 00:56:09.365927 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.365932 | orchestrator |
2025-09-10 00:56:09.365937 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2025-09-10 00:56:09.365941 | orchestrator | Wednesday 10 September 2025 00:53:23 +0000 (0:00:00.836) 0:08:38.436 ***
2025-09-10 00:56:09.365946 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.365951 | orchestrator |
2025-09-10 00:56:09.365956 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2025-09-10 00:56:09.365960 | orchestrator |
2025-09-10 00:56:09.365965 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-09-10 00:56:09.365970 | orchestrator | Wednesday 10 September 2025 00:53:24 +0000 (0:00:00.681) 0:08:39.117 ***
2025-09-10 00:56:09.365979 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-10 00:56:09.365984 | orchestrator |
2025-09-10 00:56:09.365989 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-09-10 00:56:09.365994 | orchestrator | Wednesday 10 September 2025 00:53:25 +0000 (0:00:01.276) 0:08:40.394 ***
2025-09-10 00:56:09.365998 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-10 00:56:09.366003 | orchestrator |
2025-09-10 00:56:09.366008 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-09-10 00:56:09.366013 | orchestrator | Wednesday 10 September 2025 00:53:27 +0000 (0:00:01.413) 0:08:41.808 ***
2025-09-10 00:56:09.366041 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.366046 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.366051 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.366055 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:56:09.366060 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:56:09.366065 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:56:09.366070 | orchestrator |
2025-09-10 00:56:09.366075 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-09-10 00:56:09.366079 | orchestrator | Wednesday 10 September 2025 00:53:28 +0000 (0:00:01.234) 0:08:43.043 ***
2025-09-10 00:56:09.366084 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.366089 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.366098 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.366102 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.366107 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.366112 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.366117 | orchestrator |
2025-09-10 00:56:09.366122 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-09-10 00:56:09.366126 | orchestrator | Wednesday 10 September 2025 00:53:29 +0000 (0:00:00.702) 0:08:43.745 ***
2025-09-10 00:56:09.366131 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.366136 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.366141 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.366145 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.366150 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.366155 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.366160 | orchestrator |
2025-09-10 00:56:09.366164 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-09-10 00:56:09.366169 | orchestrator | Wednesday 10 September 2025 00:53:30 +0000 (0:00:01.063) 0:08:44.809 ***
2025-09-10 00:56:09.366174 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.366179 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.366184 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.366188 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.366193 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.366198 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.366203 | orchestrator |
2025-09-10 00:56:09.366207 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-09-10 00:56:09.366212 | orchestrator | Wednesday 10 September 2025 00:53:30 +0000 (0:00:00.782) 0:08:45.592 ***
2025-09-10 00:56:09.366217 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.366222 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.366227 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.366231 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:56:09.366236 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:56:09.366241 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:56:09.366246 | orchestrator |
2025-09-10 00:56:09.366250 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-09-10 00:56:09.366255 | orchestrator | Wednesday 10 September 2025 00:53:31 +0000 (0:00:01.041) 0:08:46.633 ***
2025-09-10 00:56:09.366260 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.366265 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.366270 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.366274 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.366279 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.366286 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.366291 | orchestrator |
2025-09-10 00:56:09.366296 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-09-10 00:56:09.366301 | orchestrator | Wednesday 10 September 2025 00:53:32 +0000 (0:00:00.997) 0:08:47.631 ***
2025-09-10 00:56:09.366306 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.366310 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.366315 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.366320 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.366324 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.366329 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.366334 | orchestrator |
2025-09-10 00:56:09.366338 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-09-10 00:56:09.366343 | orchestrator | Wednesday 10 September 2025 00:53:33 +0000 (0:00:00.580) 0:08:48.211 ***
2025-09-10 00:56:09.366348 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.366353 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.366357 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.366362 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:56:09.366367 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:56:09.366372 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:56:09.366380 | orchestrator |
2025-09-10 00:56:09.366399 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-09-10 00:56:09.366404 | orchestrator | Wednesday 10 September 2025 00:53:35 +0000 (0:00:01.470) 0:08:49.682 ***
2025-09-10 00:56:09.366409 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.366413 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.366418 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:56:09.366423 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:56:09.366427 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.366432 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:56:09.366437 | orchestrator |
2025-09-10 00:56:09.366442 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-09-10 00:56:09.366446 | orchestrator | Wednesday 10 September 2025 00:53:36 +0000 (0:00:01.131) 0:08:50.813 ***
2025-09-10 00:56:09.366451 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.366456 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.366461 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.366465 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.366473 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.366478 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.366483 | orchestrator |
2025-09-10 00:56:09.366487 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-09-10 00:56:09.366492 | orchestrator | Wednesday 10 September 2025 00:53:37 +0000 (0:00:00.957) 0:08:51.771 ***
2025-09-10 00:56:09.366497 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.366502 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.366507 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.366511 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:56:09.366516 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:56:09.366521 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:56:09.366526 | orchestrator |
2025-09-10 00:56:09.366530 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-09-10 00:56:09.366535 | orchestrator | Wednesday 10 September 2025 00:53:37 +0000 (0:00:00.609) 0:08:52.381 ***
2025-09-10 00:56:09.366540 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.366545 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.366549 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.366554 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.366559 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.366564 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.366568 | orchestrator |
2025-09-10 00:56:09.366573 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-09-10 00:56:09.366578 | orchestrator | Wednesday 10 September 2025 00:53:38 +0000 (0:00:00.893) 0:08:53.275 ***
2025-09-10 00:56:09.366583 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.366587 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.366592 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.366597 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.366602 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.366607 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.366611 | orchestrator |
2025-09-10 00:56:09.366616 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-09-10 00:56:09.366621 | orchestrator | Wednesday 10 September 2025 00:53:39 +0000 (0:00:00.605) 0:08:53.880 ***
2025-09-10 00:56:09.366626 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.366630 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.366635 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.366640 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.366644 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.366649 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.366654 | orchestrator |
2025-09-10 00:56:09.366659 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-09-10 00:56:09.366664 | orchestrator | Wednesday 10 September 2025 00:53:40 +0000 (0:00:00.817) 0:08:54.698 ***
2025-09-10 00:56:09.366669 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.366677 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.366682 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.366687 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.366691 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.366696 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.366701 | orchestrator |
2025-09-10 00:56:09.366706 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-09-10 00:56:09.366711 | orchestrator | Wednesday 10 September 2025 00:53:40 +0000 (0:00:00.581) 0:08:55.279 ***
2025-09-10 00:56:09.366715 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.366720 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.366725 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.366729 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:56:09.366734 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:56:09.366739 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:56:09.366743 | orchestrator |
2025-09-10 00:56:09.366748 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-09-10 00:56:09.366753 | orchestrator | Wednesday 10 September 2025 00:53:41 +0000 (0:00:00.807) 0:08:56.086 ***
2025-09-10 00:56:09.366758 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.366763 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.366767 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.366775 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:56:09.366779 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:56:09.366784 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:56:09.366789 | orchestrator |
2025-09-10 00:56:09.366794 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-09-10 00:56:09.366799 | orchestrator | Wednesday 10 September 2025 00:53:42 +0000 (0:00:00.586) 0:08:56.673 ***
2025-09-10 00:56:09.366803 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.366808 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.366813 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.366818 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:56:09.366822 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:56:09.366827 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:56:09.366832 | orchestrator |
2025-09-10 00:56:09.366837 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-09-10 00:56:09.366841 | orchestrator | Wednesday 10 September 2025 00:53:42 +0000 (0:00:00.854) 0:08:57.528 ***
2025-09-10 00:56:09.366846 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.366851 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.366856 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.366861 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:56:09.366865 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:56:09.366870 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:56:09.366875 | orchestrator |
2025-09-10 00:56:09.366880 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2025-09-10 00:56:09.366884 | orchestrator | Wednesday 10 September 2025 00:53:44 +0000 (0:00:01.222) 0:08:58.750 ***
2025-09-10 00:56:09.366889 | orchestrator | changed: [testbed-node-3 ->
testbed-node-0(192.168.16.10)] 2025-09-10 00:56:09.366894 | orchestrator | 2025-09-10 00:56:09.366899 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2025-09-10 00:56:09.366904 | orchestrator | Wednesday 10 September 2025 00:53:48 +0000 (0:00:04.061) 0:09:02.812 *** 2025-09-10 00:56:09.366908 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-10 00:56:09.366913 | orchestrator | 2025-09-10 00:56:09.366918 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2025-09-10 00:56:09.366923 | orchestrator | Wednesday 10 September 2025 00:53:50 +0000 (0:00:02.095) 0:09:04.908 *** 2025-09-10 00:56:09.366927 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:56:09.366935 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:56:09.366940 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:56:09.366944 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:56:09.366949 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:56:09.366957 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:56:09.366962 | orchestrator | 2025-09-10 00:56:09.366967 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2025-09-10 00:56:09.366972 | orchestrator | Wednesday 10 September 2025 00:53:51 +0000 (0:00:01.735) 0:09:06.644 *** 2025-09-10 00:56:09.366976 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:56:09.366981 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:56:09.366986 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:56:09.366991 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:56:09.366995 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:56:09.367000 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:56:09.367005 | orchestrator | 2025-09-10 00:56:09.367010 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 
2025-09-10 00:56:09.367014 | orchestrator | Wednesday 10 September 2025 00:53:53 +0000 (0:00:01.351) 0:09:07.995 *** 2025-09-10 00:56:09.367019 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 00:56:09.367024 | orchestrator | 2025-09-10 00:56:09.367029 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2025-09-10 00:56:09.367034 | orchestrator | Wednesday 10 September 2025 00:53:54 +0000 (0:00:01.030) 0:09:09.025 *** 2025-09-10 00:56:09.367038 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:56:09.367043 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:56:09.367048 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:56:09.367053 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:56:09.367057 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:56:09.367062 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:56:09.367067 | orchestrator | 2025-09-10 00:56:09.367071 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2025-09-10 00:56:09.367076 | orchestrator | Wednesday 10 September 2025 00:53:56 +0000 (0:00:01.658) 0:09:10.684 *** 2025-09-10 00:56:09.367081 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:56:09.367086 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:56:09.367090 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:56:09.367095 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:56:09.367100 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:56:09.367104 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:56:09.367109 | orchestrator | 2025-09-10 00:56:09.367114 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-09-10 00:56:09.367119 | orchestrator | Wednesday 10 September 2025 00:53:59 +0000 (0:00:03.606) 
0:09:14.290 *** 2025-09-10 00:56:09.367124 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 00:56:09.367128 | orchestrator | 2025-09-10 00:56:09.367133 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2025-09-10 00:56:09.367138 | orchestrator | Wednesday 10 September 2025 00:54:00 +0000 (0:00:01.341) 0:09:15.631 *** 2025-09-10 00:56:09.367143 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:56:09.367147 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:56:09.367152 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:56:09.367157 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:56:09.367162 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:56:09.367166 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:56:09.367171 | orchestrator | 2025-09-10 00:56:09.367176 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2025-09-10 00:56:09.367181 | orchestrator | Wednesday 10 September 2025 00:54:01 +0000 (0:00:00.625) 0:09:16.257 *** 2025-09-10 00:56:09.367185 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:56:09.367190 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:56:09.367195 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:56:09.367202 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:56:09.367206 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:56:09.367214 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:56:09.367219 | orchestrator | 2025-09-10 00:56:09.367224 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2025-09-10 00:56:09.367229 | orchestrator | Wednesday 10 September 2025 00:54:04 +0000 (0:00:02.680) 0:09:18.938 *** 2025-09-10 00:56:09.367233 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:56:09.367238 | orchestrator | 
ok: [testbed-node-4] 2025-09-10 00:56:09.367243 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:56:09.367248 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:56:09.367252 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:56:09.367257 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:56:09.367262 | orchestrator | 2025-09-10 00:56:09.367267 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-09-10 00:56:09.367271 | orchestrator | 2025-09-10 00:56:09.367276 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-10 00:56:09.367281 | orchestrator | Wednesday 10 September 2025 00:54:05 +0000 (0:00:00.877) 0:09:19.816 *** 2025-09-10 00:56:09.367286 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-10 00:56:09.367291 | orchestrator | 2025-09-10 00:56:09.367296 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-10 00:56:09.367300 | orchestrator | Wednesday 10 September 2025 00:54:06 +0000 (0:00:00.884) 0:09:20.700 *** 2025-09-10 00:56:09.367305 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-10 00:56:09.367310 | orchestrator | 2025-09-10 00:56:09.367315 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-10 00:56:09.367320 | orchestrator | Wednesday 10 September 2025 00:54:06 +0000 (0:00:00.570) 0:09:21.270 *** 2025-09-10 00:56:09.367324 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.367329 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:56:09.367334 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:56:09.367339 | orchestrator | 2025-09-10 00:56:09.367346 | orchestrator | TASK [ceph-handler : Check for an osd container] 
******************************* 2025-09-10 00:56:09.367351 | orchestrator | Wednesday 10 September 2025 00:54:07 +0000 (0:00:00.620) 0:09:21.891 *** 2025-09-10 00:56:09.367356 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:56:09.367360 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:56:09.367365 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:56:09.367370 | orchestrator | 2025-09-10 00:56:09.367375 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-10 00:56:09.367379 | orchestrator | Wednesday 10 September 2025 00:54:07 +0000 (0:00:00.711) 0:09:22.602 *** 2025-09-10 00:56:09.367888 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:56:09.367894 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:56:09.367899 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:56:09.367904 | orchestrator | 2025-09-10 00:56:09.367908 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-10 00:56:09.367913 | orchestrator | Wednesday 10 September 2025 00:54:08 +0000 (0:00:00.737) 0:09:23.340 *** 2025-09-10 00:56:09.367918 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:56:09.367923 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:56:09.367927 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:56:09.367932 | orchestrator | 2025-09-10 00:56:09.367937 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-10 00:56:09.367942 | orchestrator | Wednesday 10 September 2025 00:54:09 +0000 (0:00:00.797) 0:09:24.137 *** 2025-09-10 00:56:09.367947 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.367952 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:56:09.367956 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:56:09.367961 | orchestrator | 2025-09-10 00:56:09.367966 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-10 
00:56:09.367971 | orchestrator | Wednesday 10 September 2025 00:54:10 +0000 (0:00:00.624) 0:09:24.762 *** 2025-09-10 00:56:09.367981 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.367986 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:56:09.367990 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:56:09.367995 | orchestrator | 2025-09-10 00:56:09.368000 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-10 00:56:09.368004 | orchestrator | Wednesday 10 September 2025 00:54:10 +0000 (0:00:00.350) 0:09:25.112 *** 2025-09-10 00:56:09.368009 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.368014 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:56:09.368018 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:56:09.368023 | orchestrator | 2025-09-10 00:56:09.368028 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-10 00:56:09.368032 | orchestrator | Wednesday 10 September 2025 00:54:10 +0000 (0:00:00.288) 0:09:25.401 *** 2025-09-10 00:56:09.368037 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:56:09.368042 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:56:09.368047 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:56:09.368051 | orchestrator | 2025-09-10 00:56:09.368056 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-10 00:56:09.368061 | orchestrator | Wednesday 10 September 2025 00:54:11 +0000 (0:00:00.745) 0:09:26.147 *** 2025-09-10 00:56:09.368065 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:56:09.368070 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:56:09.368075 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:56:09.368080 | orchestrator | 2025-09-10 00:56:09.368084 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-10 00:56:09.368089 | orchestrator | 
Wednesday 10 September 2025 00:54:12 +0000 (0:00:01.049) 0:09:27.196 *** 2025-09-10 00:56:09.368094 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.368099 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:56:09.368103 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:56:09.368108 | orchestrator | 2025-09-10 00:56:09.368113 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-10 00:56:09.368117 | orchestrator | Wednesday 10 September 2025 00:54:12 +0000 (0:00:00.292) 0:09:27.488 *** 2025-09-10 00:56:09.368122 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.368127 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:56:09.368132 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:56:09.368136 | orchestrator | 2025-09-10 00:56:09.368145 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-10 00:56:09.368151 | orchestrator | Wednesday 10 September 2025 00:54:13 +0000 (0:00:00.306) 0:09:27.794 *** 2025-09-10 00:56:09.368155 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:56:09.368160 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:56:09.368165 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:56:09.368170 | orchestrator | 2025-09-10 00:56:09.368174 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-10 00:56:09.368179 | orchestrator | Wednesday 10 September 2025 00:54:13 +0000 (0:00:00.310) 0:09:28.105 *** 2025-09-10 00:56:09.368184 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:56:09.368189 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:56:09.368193 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:56:09.368198 | orchestrator | 2025-09-10 00:56:09.368203 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-10 00:56:09.368207 | orchestrator | Wednesday 10 September 2025 00:54:14 
+0000 (0:00:00.641) 0:09:28.747 *** 2025-09-10 00:56:09.368212 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:56:09.368217 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:56:09.368222 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:56:09.368226 | orchestrator | 2025-09-10 00:56:09.368231 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-10 00:56:09.368236 | orchestrator | Wednesday 10 September 2025 00:54:14 +0000 (0:00:00.469) 0:09:29.216 *** 2025-09-10 00:56:09.368241 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.368246 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:56:09.368254 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:56:09.368259 | orchestrator | 2025-09-10 00:56:09.368266 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-10 00:56:09.368273 | orchestrator | Wednesday 10 September 2025 00:54:14 +0000 (0:00:00.313) 0:09:29.529 *** 2025-09-10 00:56:09.368280 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.368288 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:56:09.368296 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:56:09.368303 | orchestrator | 2025-09-10 00:56:09.368310 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-10 00:56:09.368322 | orchestrator | Wednesday 10 September 2025 00:54:15 +0000 (0:00:00.288) 0:09:29.818 *** 2025-09-10 00:56:09.368328 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.368335 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:56:09.368342 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:56:09.368349 | orchestrator | 2025-09-10 00:56:09.368356 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-10 00:56:09.368363 | orchestrator | Wednesday 10 September 2025 00:54:15 +0000 (0:00:00.642) 
0:09:30.461 *** 2025-09-10 00:56:09.368370 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:56:09.368378 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:56:09.368424 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:56:09.368432 | orchestrator | 2025-09-10 00:56:09.368439 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-10 00:56:09.368446 | orchestrator | Wednesday 10 September 2025 00:54:16 +0000 (0:00:00.406) 0:09:30.867 *** 2025-09-10 00:56:09.368454 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:56:09.368460 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:56:09.368467 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:56:09.368474 | orchestrator | 2025-09-10 00:56:09.368480 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2025-09-10 00:56:09.368486 | orchestrator | Wednesday 10 September 2025 00:54:16 +0000 (0:00:00.578) 0:09:31.446 *** 2025-09-10 00:56:09.368493 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:56:09.368500 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:56:09.368506 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-09-10 00:56:09.368513 | orchestrator | 2025-09-10 00:56:09.368520 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2025-09-10 00:56:09.368529 | orchestrator | Wednesday 10 September 2025 00:54:17 +0000 (0:00:00.659) 0:09:32.105 *** 2025-09-10 00:56:09.368534 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-10 00:56:09.368539 | orchestrator | 2025-09-10 00:56:09.368543 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2025-09-10 00:56:09.368548 | orchestrator | Wednesday 10 September 2025 00:54:20 +0000 (0:00:02.587) 0:09:34.693 *** 2025-09-10 00:56:09.368553 | orchestrator | skipping: [testbed-node-3] => 
(item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-09-10 00:56:09.368558 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.368563 | orchestrator | 2025-09-10 00:56:09.368567 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-09-10 00:56:09.368572 | orchestrator | Wednesday 10 September 2025 00:54:20 +0000 (0:00:00.230) 0:09:34.923 *** 2025-09-10 00:56:09.368577 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-10 00:56:09.368587 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-10 00:56:09.368597 | orchestrator | 2025-09-10 00:56:09.368601 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2025-09-10 00:56:09.368606 | orchestrator | Wednesday 10 September 2025 00:54:29 +0000 (0:00:08.796) 0:09:43.720 *** 2025-09-10 00:56:09.368610 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-10 00:56:09.368615 | orchestrator | 2025-09-10 00:56:09.368623 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-09-10 00:56:09.368628 | orchestrator | Wednesday 10 September 2025 00:54:32 +0000 (0:00:03.742) 0:09:47.462 *** 2025-09-10 00:56:09.368632 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2025-09-10 00:56:09.368638 | orchestrator | 2025-09-10 00:56:09.368642 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2025-09-10 00:56:09.368646 | orchestrator | Wednesday 10 September 2025 00:54:33 +0000 (0:00:00.785) 0:09:48.248 *** 2025-09-10 00:56:09.368651 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-09-10 00:56:09.368655 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-09-10 00:56:09.368660 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-09-10 00:56:09.368664 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-09-10 00:56:09.368669 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-09-10 00:56:09.368673 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-09-10 00:56:09.368677 | orchestrator | 2025-09-10 00:56:09.368682 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-09-10 00:56:09.368686 | orchestrator | Wednesday 10 September 2025 00:54:34 +0000 (0:00:01.043) 0:09:49.291 *** 2025-09-10 00:56:09.368691 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-10 00:56:09.368695 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-10 00:56:09.368700 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-10 00:56:09.368704 | orchestrator | 2025-09-10 00:56:09.368709 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-09-10 00:56:09.368713 | orchestrator | Wednesday 10 September 2025 00:54:36 +0000 (0:00:02.247) 0:09:51.539 *** 2025-09-10 00:56:09.368718 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-10 00:56:09.368725 | orchestrator | skipping: [testbed-node-3] 
=> (item=None)  2025-09-10 00:56:09.368730 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:56:09.368734 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-10 00:56:09.368738 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-09-10 00:56:09.368743 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:56:09.368747 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-10 00:56:09.368752 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-09-10 00:56:09.368756 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:56:09.368761 | orchestrator | 2025-09-10 00:56:09.368765 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2025-09-10 00:56:09.368770 | orchestrator | Wednesday 10 September 2025 00:54:38 +0000 (0:00:01.170) 0:09:52.709 *** 2025-09-10 00:56:09.368774 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:56:09.368779 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:56:09.368783 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:56:09.368788 | orchestrator | 2025-09-10 00:56:09.368792 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2025-09-10 00:56:09.368797 | orchestrator | Wednesday 10 September 2025 00:54:40 +0000 (0:00:02.598) 0:09:55.307 *** 2025-09-10 00:56:09.368801 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:56:09.368805 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:56:09.368810 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:56:09.368818 | orchestrator | 2025-09-10 00:56:09.368822 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2025-09-10 00:56:09.368827 | orchestrator | Wednesday 10 September 2025 00:54:41 +0000 (0:00:00.617) 0:09:55.924 *** 2025-09-10 00:56:09.368831 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2025-09-10 00:56:09.368836 | orchestrator | 2025-09-10 00:56:09.368840 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2025-09-10 00:56:09.368845 | orchestrator | Wednesday 10 September 2025 00:54:41 +0000 (0:00:00.526) 0:09:56.451 *** 2025-09-10 00:56:09.368849 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-10 00:56:09.368853 | orchestrator | 2025-09-10 00:56:09.368858 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2025-09-10 00:56:09.368862 | orchestrator | Wednesday 10 September 2025 00:54:42 +0000 (0:00:00.761) 0:09:57.212 *** 2025-09-10 00:56:09.368867 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:56:09.368871 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:56:09.368876 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:56:09.368880 | orchestrator | 2025-09-10 00:56:09.368885 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2025-09-10 00:56:09.368889 | orchestrator | Wednesday 10 September 2025 00:54:43 +0000 (0:00:01.370) 0:09:58.583 *** 2025-09-10 00:56:09.368894 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:56:09.368898 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:56:09.368903 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:56:09.368907 | orchestrator | 2025-09-10 00:56:09.368911 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2025-09-10 00:56:09.368916 | orchestrator | Wednesday 10 September 2025 00:54:45 +0000 (0:00:01.156) 0:09:59.740 *** 2025-09-10 00:56:09.368920 | orchestrator | changed: [testbed-node-3] 2025-09-10 00:56:09.368925 | orchestrator | changed: [testbed-node-4] 2025-09-10 00:56:09.368929 | orchestrator | changed: [testbed-node-5] 2025-09-10 00:56:09.368934 | orchestrator | 2025-09-10 
00:56:09.368938 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2025-09-10 00:56:09.368943 | orchestrator | Wednesday 10 September 2025 00:54:46 +0000 (0:00:01.744) 0:10:01.484 ***
2025-09-10 00:56:09.368947 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:56:09.368952 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:56:09.368956 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:56:09.368961 | orchestrator |
2025-09-10 00:56:09.368968 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2025-09-10 00:56:09.368972 | orchestrator | Wednesday 10 September 2025 00:54:49 +0000 (0:00:02.223) 0:10:03.707 ***
2025-09-10 00:56:09.368977 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.368981 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.368986 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.368990 | orchestrator |
2025-09-10 00:56:09.368995 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-09-10 00:56:09.368999 | orchestrator | Wednesday 10 September 2025 00:54:50 +0000 (0:00:01.258) 0:10:04.965 ***
2025-09-10 00:56:09.369004 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:56:09.369008 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:56:09.369013 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:56:09.369017 | orchestrator |
2025-09-10 00:56:09.369022 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2025-09-10 00:56:09.369026 | orchestrator | Wednesday 10 September 2025 00:54:51 +0000 (0:00:00.942) 0:10:05.908 ***
2025-09-10 00:56:09.369031 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-10 00:56:09.369035 | orchestrator |
2025-09-10 00:56:09.369040 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2025-09-10 00:56:09.369044 | orchestrator | Wednesday 10 September 2025 00:54:51 +0000 (0:00:00.529) 0:10:06.438 ***
2025-09-10 00:56:09.369055 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.369059 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.369064 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.369068 | orchestrator |
2025-09-10 00:56:09.369073 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2025-09-10 00:56:09.369077 | orchestrator | Wednesday 10 September 2025 00:54:52 +0000 (0:00:00.315) 0:10:06.753 ***
2025-09-10 00:56:09.369082 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:56:09.369086 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:56:09.369091 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:56:09.369095 | orchestrator |
2025-09-10 00:56:09.369100 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2025-09-10 00:56:09.369107 | orchestrator | Wednesday 10 September 2025 00:54:53 +0000 (0:00:01.450) 0:10:08.204 ***
2025-09-10 00:56:09.369111 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-10 00:56:09.369116 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-10 00:56:09.369120 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-10 00:56:09.369125 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.369129 | orchestrator |
2025-09-10 00:56:09.369134 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2025-09-10 00:56:09.369138 | orchestrator | Wednesday 10 September 2025 00:54:54 +0000 (0:00:00.642) 0:10:08.846 ***
2025-09-10 00:56:09.369143 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.369148 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.369152 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.369156 | orchestrator |
2025-09-10 00:56:09.369161 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-09-10 00:56:09.369165 | orchestrator |
2025-09-10 00:56:09.369170 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-09-10 00:56:09.369174 | orchestrator | Wednesday 10 September 2025 00:54:54 +0000 (0:00:00.561) 0:10:09.407 ***
2025-09-10 00:56:09.369179 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-10 00:56:09.369184 | orchestrator |
2025-09-10 00:56:09.369188 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-09-10 00:56:09.369193 | orchestrator | Wednesday 10 September 2025 00:54:55 +0000 (0:00:00.803) 0:10:10.210 ***
2025-09-10 00:56:09.369197 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-10 00:56:09.369202 | orchestrator |
2025-09-10 00:56:09.369206 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-09-10 00:56:09.369211 | orchestrator | Wednesday 10 September 2025 00:54:56 +0000 (0:00:00.547) 0:10:10.758 ***
2025-09-10 00:56:09.369215 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.369220 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.369224 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.369229 | orchestrator |
2025-09-10 00:56:09.369233 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-09-10 00:56:09.369238 | orchestrator | Wednesday 10 September 2025 00:54:56 +0000 (0:00:00.542) 0:10:11.301 ***
2025-09-10 00:56:09.369242 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.369247 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.369251 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.369256 | orchestrator |
2025-09-10 00:56:09.369260 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-09-10 00:56:09.369264 | orchestrator | Wednesday 10 September 2025 00:54:57 +0000 (0:00:00.698) 0:10:12.000 ***
2025-09-10 00:56:09.369269 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.369273 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.369278 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.369282 | orchestrator |
2025-09-10 00:56:09.369287 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-09-10 00:56:09.369295 | orchestrator | Wednesday 10 September 2025 00:54:58 +0000 (0:00:00.688) 0:10:12.688 ***
2025-09-10 00:56:09.369299 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.369304 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.369308 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.369313 | orchestrator |
2025-09-10 00:56:09.369317 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-09-10 00:56:09.369322 | orchestrator | Wednesday 10 September 2025 00:54:58 +0000 (0:00:00.749) 0:10:13.438 ***
2025-09-10 00:56:09.369326 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.369331 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.369335 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.369340 | orchestrator |
2025-09-10 00:56:09.369344 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-09-10 00:56:09.369351 | orchestrator | Wednesday 10 September 2025 00:54:59 +0000 (0:00:00.591) 0:10:14.029 ***
2025-09-10 00:56:09.369356 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.369360 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.369365 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.369369 | orchestrator |
2025-09-10 00:56:09.369374 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-09-10 00:56:09.369378 | orchestrator | Wednesday 10 September 2025 00:54:59 +0000 (0:00:00.312) 0:10:14.342 ***
2025-09-10 00:56:09.369396 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.369401 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.369406 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.369410 | orchestrator |
2025-09-10 00:56:09.369415 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-09-10 00:56:09.369419 | orchestrator | Wednesday 10 September 2025 00:54:59 +0000 (0:00:00.284) 0:10:14.626 ***
2025-09-10 00:56:09.369424 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.369428 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.369433 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.369437 | orchestrator |
2025-09-10 00:56:09.369442 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-09-10 00:56:09.369446 | orchestrator | Wednesday 10 September 2025 00:55:00 +0000 (0:00:00.701) 0:10:15.328 ***
2025-09-10 00:56:09.369451 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.369455 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.369460 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.369464 | orchestrator |
2025-09-10 00:56:09.369469 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-09-10 00:56:09.369473 | orchestrator | Wednesday 10 September 2025 00:55:01 +0000 (0:00:00.984) 0:10:16.312 ***
2025-09-10 00:56:09.369478 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.369483 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.369487 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.369491 | orchestrator |
2025-09-10 00:56:09.369496 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-09-10 00:56:09.369500 | orchestrator | Wednesday 10 September 2025 00:55:01 +0000 (0:00:00.302) 0:10:16.615 ***
2025-09-10 00:56:09.369505 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.369509 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.369517 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.369522 | orchestrator |
2025-09-10 00:56:09.369526 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-09-10 00:56:09.369531 | orchestrator | Wednesday 10 September 2025 00:55:02 +0000 (0:00:00.289) 0:10:16.904 ***
2025-09-10 00:56:09.369535 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.369540 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.369544 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.369549 | orchestrator |
2025-09-10 00:56:09.369553 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-09-10 00:56:09.369558 | orchestrator | Wednesday 10 September 2025 00:55:02 +0000 (0:00:00.331) 0:10:17.236 ***
2025-09-10 00:56:09.369566 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.369570 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.369575 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.369579 | orchestrator |
2025-09-10 00:56:09.369584 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-09-10 00:56:09.369588 | orchestrator | Wednesday 10 September 2025 00:55:03 +0000 (0:00:00.574) 0:10:17.811 ***
2025-09-10 00:56:09.369593 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.369597 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.369602 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.369607 | orchestrator |
2025-09-10 00:56:09.369611 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-09-10 00:56:09.369616 | orchestrator | Wednesday 10 September 2025 00:55:03 +0000 (0:00:00.337) 0:10:18.148 ***
2025-09-10 00:56:09.369620 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.369625 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.369629 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.369634 | orchestrator |
2025-09-10 00:56:09.369638 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-09-10 00:56:09.369643 | orchestrator | Wednesday 10 September 2025 00:55:03 +0000 (0:00:00.305) 0:10:18.454 ***
2025-09-10 00:56:09.369647 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.369652 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.369656 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.369661 | orchestrator |
2025-09-10 00:56:09.369665 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-09-10 00:56:09.369670 | orchestrator | Wednesday 10 September 2025 00:55:04 +0000 (0:00:00.317) 0:10:18.772 ***
2025-09-10 00:56:09.369674 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.369679 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.369683 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.369688 | orchestrator |
2025-09-10 00:56:09.369692 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-09-10 00:56:09.369697 | orchestrator | Wednesday 10 September 2025 00:55:04 +0000 (0:00:00.581) 0:10:19.353 ***
2025-09-10 00:56:09.369702 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.369706 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.369711 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.369715 | orchestrator |
2025-09-10 00:56:09.369720 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-09-10 00:56:09.369724 | orchestrator | Wednesday 10 September 2025 00:55:05 +0000 (0:00:00.347) 0:10:19.701 ***
2025-09-10 00:56:09.369729 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.369733 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.369737 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.369742 | orchestrator |
2025-09-10 00:56:09.369746 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2025-09-10 00:56:09.369751 | orchestrator | Wednesday 10 September 2025 00:55:05 +0000 (0:00:00.594) 0:10:20.295 ***
2025-09-10 00:56:09.369755 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-10 00:56:09.369760 | orchestrator |
2025-09-10 00:56:09.369764 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2025-09-10 00:56:09.369769 | orchestrator | Wednesday 10 September 2025 00:55:06 +0000 (0:00:00.836) 0:10:21.131 ***
2025-09-10 00:56:09.369776 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-10 00:56:09.369781 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-09-10 00:56:09.369785 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-09-10 00:56:09.369790 | orchestrator |
2025-09-10 00:56:09.369795 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2025-09-10 00:56:09.369799 | orchestrator | Wednesday 10 September 2025 00:55:08 +0000 (0:00:02.166) 0:10:23.298 ***
2025-09-10 00:56:09.369803 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-10 00:56:09.369811 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-09-10 00:56:09.369816 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:56:09.369820 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-10 00:56:09.369825 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-09-10 00:56:09.369829 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:56:09.369834 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-10 00:56:09.369838 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-09-10 00:56:09.369843 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:56:09.369847 | orchestrator |
2025-09-10 00:56:09.369852 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2025-09-10 00:56:09.369856 | orchestrator | Wednesday 10 September 2025 00:55:09 +0000 (0:00:01.231) 0:10:24.530 ***
2025-09-10 00:56:09.369861 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.369865 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.369870 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.369874 | orchestrator |
2025-09-10 00:56:09.369879 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2025-09-10 00:56:09.369883 | orchestrator | Wednesday 10 September 2025 00:55:10 +0000 (0:00:00.316) 0:10:24.846 ***
2025-09-10 00:56:09.369888 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-10 00:56:09.369892 | orchestrator |
2025-09-10 00:56:09.369897 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2025-09-10 00:56:09.369904 | orchestrator | Wednesday 10 September 2025 00:55:10 +0000 (0:00:00.776) 0:10:25.622 ***
2025-09-10 00:56:09.369908 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-09-10 00:56:09.369913 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-09-10 00:56:09.369918 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-09-10 00:56:09.369922 | orchestrator |
2025-09-10 00:56:09.369927 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2025-09-10 00:56:09.369931 | orchestrator | Wednesday 10 September 2025 00:55:11 +0000 (0:00:00.814) 0:10:26.436 ***
2025-09-10 00:56:09.369936 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-10 00:56:09.369941 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2025-09-10 00:56:09.369945 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-10 00:56:09.369950 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2025-09-10 00:56:09.369954 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-10 00:56:09.369959 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2025-09-10 00:56:09.369963 | orchestrator |
2025-09-10 00:56:09.369968 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2025-09-10 00:56:09.369972 | orchestrator | Wednesday 10 September 2025 00:55:16 +0000 (0:00:05.088) 0:10:31.525 ***
2025-09-10 00:56:09.369977 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-10 00:56:09.369981 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-09-10 00:56:09.369986 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-10 00:56:09.369990 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2025-09-10 00:56:09.369998 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-10 00:56:09.370002 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2025-09-10 00:56:09.370007 | orchestrator |
2025-09-10 00:56:09.370011 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2025-09-10 00:56:09.370038 | orchestrator | Wednesday 10 September 2025 00:55:19 +0000 (0:00:02.800) 0:10:34.325 ***
2025-09-10 00:56:09.370043 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-10 00:56:09.370048 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:56:09.370053 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-10 00:56:09.370057 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:56:09.370061 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-10 00:56:09.370066 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:56:09.370070 | orchestrator |
2025-09-10 00:56:09.370075 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] **************************************
2025-09-10 00:56:09.370079 | orchestrator | Wednesday 10 September 2025 00:55:20 +0000 (0:00:01.204) 0:10:35.530 ***
2025-09-10 00:56:09.370087 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3
2025-09-10 00:56:09.370091 | orchestrator |
2025-09-10 00:56:09.370096 | orchestrator | TASK [ceph-rgw : Create ec profile] ********************************************
2025-09-10 00:56:09.370100 | orchestrator | Wednesday 10 September 2025 00:55:21 +0000 (0:00:00.241) 0:10:35.772 ***
2025-09-10 00:56:09.370105 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-09-10 00:56:09.370110 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-09-10 00:56:09.370115 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-09-10 00:56:09.370119 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-09-10 00:56:09.370124 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-09-10 00:56:09.370129 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.370133 | orchestrator |
2025-09-10 00:56:09.370138 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2025-09-10 00:56:09.370142 | orchestrator | Wednesday 10 September 2025 00:55:21 +0000 (0:00:00.698) 0:10:36.470 ***
2025-09-10 00:56:09.370147 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-09-10 00:56:09.370151 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-09-10 00:56:09.370159 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-09-10 00:56:09.370163 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-09-10 00:56:09.370168 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-09-10 00:56:09.370172 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.370177 | orchestrator |
2025-09-10 00:56:09.370181 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2025-09-10 00:56:09.370186 | orchestrator | Wednesday 10 September 2025 00:55:22 +0000 (0:00:00.700) 0:10:37.171 ***
2025-09-10 00:56:09.370191 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-09-10 00:56:09.370195 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-09-10 00:56:09.370204 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-09-10 00:56:09.370209 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-09-10 00:56:09.370213 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-09-10 00:56:09.370218 | orchestrator |
2025-09-10 00:56:09.370222 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2025-09-10 00:56:09.370227 | orchestrator | Wednesday 10 September 2025 00:55:54 +0000 (0:00:32.155) 0:11:09.327 ***
2025-09-10 00:56:09.370232 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.370236 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.370241 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.370245 | orchestrator |
2025-09-10 00:56:09.370250 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2025-09-10 00:56:09.370254 | orchestrator | Wednesday 10 September 2025 00:55:55 +0000 (0:00:00.345) 0:11:09.673 ***
2025-09-10 00:56:09.370259 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.370264 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.370268 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.370273 | orchestrator |
2025-09-10 00:56:09.370277 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2025-09-10 00:56:09.370282 | orchestrator | Wednesday 10 September 2025 00:55:55 +0000 (0:00:00.628) 0:11:10.301 ***
2025-09-10 00:56:09.370286 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-10 00:56:09.370291 | orchestrator |
2025-09-10 00:56:09.370295 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2025-09-10 00:56:09.370300 | orchestrator | Wednesday 10 September 2025 00:55:56 +0000 (0:00:00.550) 0:11:10.851 ***
2025-09-10 00:56:09.370304 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-10 00:56:09.370309 | orchestrator |
2025-09-10 00:56:09.370313 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2025-09-10 00:56:09.370318 | orchestrator | Wednesday 10 September 2025 00:55:56 +0000 (0:00:00.766) 0:11:11.618 ***
2025-09-10 00:56:09.370325 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:56:09.370329 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:56:09.370334 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:56:09.370338 | orchestrator |
2025-09-10 00:56:09.370343 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2025-09-10 00:56:09.370347 | orchestrator | Wednesday 10 September 2025 00:55:58 +0000 (0:00:01.297) 0:11:12.915 ***
2025-09-10 00:56:09.370352 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:56:09.370356 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:56:09.370361 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:56:09.370365 | orchestrator |
2025-09-10 00:56:09.370370 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2025-09-10 00:56:09.370375 | orchestrator | Wednesday 10 September 2025 00:55:59 +0000 (0:00:01.181) 0:11:14.097 ***
2025-09-10 00:56:09.370379 | orchestrator | changed: [testbed-node-3]
2025-09-10 00:56:09.370396 | orchestrator | changed: [testbed-node-4]
2025-09-10 00:56:09.370400 | orchestrator | changed: [testbed-node-5]
2025-09-10 00:56:09.370405 | orchestrator |
2025-09-10 00:56:09.370409 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2025-09-10 00:56:09.370414 | orchestrator | Wednesday 10 September 2025 00:56:01 +0000 (0:00:01.830) 0:11:15.928 ***
2025-09-10 00:56:09.370418 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-09-10 00:56:09.370426 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-09-10 00:56:09.370431 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-09-10 00:56:09.370435 | orchestrator |
2025-09-10 00:56:09.370440 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-09-10 00:56:09.370444 | orchestrator | Wednesday 10 September 2025 00:56:04 +0000 (0:00:03.167) 0:11:19.095 ***
2025-09-10 00:56:09.370449 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.370456 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.370461 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.370465 | orchestrator |
2025-09-10 00:56:09.370470 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2025-09-10 00:56:09.370474 | orchestrator | Wednesday 10 September 2025 00:56:04 +0000 (0:00:00.365) 0:11:19.460 ***
2025-09-10 00:56:09.370479 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-10 00:56:09.370483 | orchestrator |
2025-09-10 00:56:09.370488 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2025-09-10 00:56:09.370492 | orchestrator | Wednesday 10 September 2025 00:56:05 +0000 (0:00:00.929) 0:11:20.390 ***
2025-09-10 00:56:09.370497 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.370501 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.370506 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.370510 | orchestrator |
2025-09-10 00:56:09.370515 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2025-09-10 00:56:09.370519 | orchestrator | Wednesday 10 September 2025 00:56:06 +0000 (0:00:00.311) 0:11:20.702 ***
2025-09-10 00:56:09.370524 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.370528 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:56:09.370533 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:56:09.370537 | orchestrator |
2025-09-10 00:56:09.370542 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2025-09-10 00:56:09.370546 | orchestrator | Wednesday 10 September 2025 00:56:06 +0000 (0:00:00.345) 0:11:21.047 ***
2025-09-10 00:56:09.370551 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-10 00:56:09.370555 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-10 00:56:09.370560 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-10 00:56:09.370564 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:56:09.370569 | orchestrator |
2025-09-10 00:56:09.370574 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2025-09-10 00:56:09.370578 | orchestrator | Wednesday 10 September 2025 00:56:07 +0000 (0:00:01.273) 0:11:22.321 ***
2025-09-10 00:56:09.370583 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:56:09.370587 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:56:09.370592 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:56:09.370596 | orchestrator |
2025-09-10 00:56:09.370600 | orchestrator | PLAY RECAP *********************************************************************
2025-09-10 00:56:09.370605 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0
2025-09-10 00:56:09.370610 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2025-09-10 00:56:09.370614 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2025-09-10 00:56:09.370619 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0
2025-09-10 00:56:09.370626 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2025-09-10 00:56:09.370631 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2025-09-10 00:56:09.370635 | orchestrator |
2025-09-10 00:56:09.370640 | orchestrator |
2025-09-10 00:56:09.370644 | orchestrator |
2025-09-10 00:56:09.370651 | orchestrator | TASKS RECAP ********************************************************************
2025-09-10 00:56:09.370656 | orchestrator | Wednesday 10 September 2025 00:56:07 +0000 (0:00:00.245) 0:11:22.567 ***
2025-09-10 00:56:09.370660 | orchestrator | ===============================================================================
2025-09-10 00:56:09.370665 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 61.28s
2025-09-10 00:56:09.370669 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 41.51s
2025-09-10 00:56:09.370674 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 32.16s
2025-09-10 00:56:09.370678 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.72s
2025-09-10 00:56:09.370683 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.89s
2025-09-10 00:56:09.370687 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 13.95s
2025-09-10 00:56:09.370692 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.97s
2025-09-10 00:56:09.370696 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.83s
2025-09-10 00:56:09.370701 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.80s
2025-09-10 00:56:09.370705 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 8.28s
2025-09-10 00:56:09.370710 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.96s
2025-09-10 00:56:09.370714 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.36s
2025-09-10 00:56:09.370718 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 5.09s
2025-09-10 00:56:09.370723 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.75s
2025-09-10 00:56:09.370727 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 4.16s
2025-09-10 00:56:09.370734 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.06s
2025-09-10 00:56:09.370739 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.74s
2025-09-10 00:56:09.370743 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.61s
2025-09-10 00:56:09.370748 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.57s
2025-09-10 00:56:09.370752 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.50s
2025-09-10 00:56:09.370757 | orchestrator | 2025-09-10 00:56:09 | INFO  | Task 117b03f8-6775-47b8-989a-3c5f4008eab9 is in state STARTED
2025-09-10 00:56:09.370761 | orchestrator | 2025-09-10 00:56:09 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:56:12.395995 | orchestrator | 2025-09-10 00:56:12 | INFO  | Task bee789ca-c824-40fe-8a58-e8ac5e216152 is in state STARTED
2025-09-10 00:56:12.398691 | orchestrator | 2025-09-10 00:56:12 | INFO  | Task 98398807-3013-47aa-93fc-914c29011ea0 is in state STARTED
2025-09-10 00:56:12.401951 | orchestrator | 2025-09-10 00:56:12 | INFO  | Task 117b03f8-6775-47b8-989a-3c5f4008eab9 is in state STARTED
2025-09-10 00:56:12.401973 | orchestrator | 2025-09-10 00:56:12 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:56:15.453726 | orchestrator | 2025-09-10 00:56:15 | INFO  | Task bee789ca-c824-40fe-8a58-e8ac5e216152 is in state STARTED
2025-09-10 00:56:15.455672 | orchestrator | 2025-09-10 00:56:15 | INFO  | Task 98398807-3013-47aa-93fc-914c29011ea0 is in state STARTED
2025-09-10 00:56:15.457249 | orchestrator | 2025-09-10 00:56:15 | INFO  | Task 117b03f8-6775-47b8-989a-3c5f4008eab9 is in state STARTED
2025-09-10 00:56:15.457446 | orchestrator | 2025-09-10 00:56:15 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:56:18.498706 | orchestrator | 2025-09-10 00:56:18 | INFO  | Task bee789ca-c824-40fe-8a58-e8ac5e216152 is in state STARTED
2025-09-10 00:56:18.500205 | orchestrator | 2025-09-10 00:56:18 | INFO  | Task 98398807-3013-47aa-93fc-914c29011ea0 is in state STARTED
2025-09-10 00:56:18.502775 | orchestrator | 2025-09-10 00:56:18 | INFO  | Task 117b03f8-6775-47b8-989a-3c5f4008eab9 is in state STARTED
2025-09-10 00:56:18.502796 | orchestrator | 2025-09-10 00:56:18 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:56:21.549045 | orchestrator | 2025-09-10 00:56:21 | INFO  | Task bee789ca-c824-40fe-8a58-e8ac5e216152 is in state STARTED
2025-09-10 00:56:21.551610 | orchestrator | 2025-09-10 00:56:21 | INFO  | Task 98398807-3013-47aa-93fc-914c29011ea0 is in state STARTED
2025-09-10 00:56:21.553661 | orchestrator | 2025-09-10 00:56:21 | INFO  | Task 117b03f8-6775-47b8-989a-3c5f4008eab9 is in state STARTED
2025-09-10 00:56:21.554721 | orchestrator | 2025-09-10 00:56:21 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:56:24.604084 | orchestrator | 2025-09-10 00:56:24 | INFO  | Task bee789ca-c824-40fe-8a58-e8ac5e216152 is in state STARTED
2025-09-10 00:56:24.605766 | orchestrator | 2025-09-10 00:56:24 | INFO  | Task 98398807-3013-47aa-93fc-914c29011ea0 is in state STARTED
2025-09-10 00:56:24.608115 | orchestrator | 2025-09-10 00:56:24 | INFO  | Task 117b03f8-6775-47b8-989a-3c5f4008eab9 is in state STARTED
2025-09-10 00:56:24.608799 | orchestrator | 2025-09-10 00:56:24 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:56:27.652708 | orchestrator | 2025-09-10 00:56:27 | INFO  | Task bee789ca-c824-40fe-8a58-e8ac5e216152 is in state STARTED
2025-09-10 00:56:27.654235 | orchestrator | 2025-09-10 00:56:27 | INFO  | Task 98398807-3013-47aa-93fc-914c29011ea0 is in state STARTED
2025-09-10 00:56:27.655617 | orchestrator | 2025-09-10 00:56:27 | INFO  | Task 117b03f8-6775-47b8-989a-3c5f4008eab9 is in state STARTED
2025-09-10 00:56:27.655860 | orchestrator | 2025-09-10 00:56:27 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:56:30.695158 | orchestrator | 2025-09-10 00:56:30 | INFO  | Task bee789ca-c824-40fe-8a58-e8ac5e216152 is in state STARTED
2025-09-10 00:56:30.698991 | orchestrator | 2025-09-10 00:56:30 | INFO  | Task 98398807-3013-47aa-93fc-914c29011ea0 is in state STARTED
2025-09-10 00:56:30.700601 | orchestrator | 2025-09-10 00:56:30 | INFO  | Task 117b03f8-6775-47b8-989a-3c5f4008eab9 is in state STARTED
2025-09-10 00:56:30.700635 | orchestrator | 2025-09-10 00:56:30 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:56:33.736589 | orchestrator | 2025-09-10 00:56:33 | INFO  | Task bee789ca-c824-40fe-8a58-e8ac5e216152 is in state STARTED
2025-09-10 00:56:33.737307 | orchestrator | 2025-09-10 00:56:33 | INFO  | Task 98398807-3013-47aa-93fc-914c29011ea0 is in state STARTED
2025-09-10 00:56:33.740176 | orchestrator | 2025-09-10 00:56:33 | INFO  | Task 117b03f8-6775-47b8-989a-3c5f4008eab9 is in state STARTED
2025-09-10 00:56:33.740256 | orchestrator | 2025-09-10 00:56:33 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:56:36.798992 | orchestrator | 2025-09-10 00:56:36 | INFO  | Task bee789ca-c824-40fe-8a58-e8ac5e216152 is in state STARTED
2025-09-10 00:56:36.800592 | orchestrator | 2025-09-10 00:56:36 | INFO  | Task 98398807-3013-47aa-93fc-914c29011ea0 is in state STARTED
2025-09-10 00:56:36.802707 | orchestrator | 2025-09-10 00:56:36 | INFO  | Task 117b03f8-6775-47b8-989a-3c5f4008eab9 is in state STARTED
2025-09-10 00:56:36.802765 | orchestrator | 2025-09-10 00:56:36 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:56:39.848981 | orchestrator | 2025-09-10 00:56:39 | INFO  | Task bee789ca-c824-40fe-8a58-e8ac5e216152 is in state STARTED
2025-09-10 00:56:39.851685 | orchestrator | 2025-09-10 00:56:39 | INFO  | Task 98398807-3013-47aa-93fc-914c29011ea0 is in state STARTED
2025-09-10 00:56:39.854700 | orchestrator | 2025-09-10 00:56:39 | INFO  | Task 117b03f8-6775-47b8-989a-3c5f4008eab9 is in state STARTED
2025-09-10 00:56:39.855561 | orchestrator | 2025-09-10 00:56:39 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:56:42.911983 | orchestrator |
2025-09-10 00:56:42 | INFO  | Task bee789ca-c824-40fe-8a58-e8ac5e216152 is in state STARTED 2025-09-10 00:56:42.914377 | orchestrator | 2025-09-10 00:56:42 | INFO  | Task 98398807-3013-47aa-93fc-914c29011ea0 is in state STARTED 2025-09-10 00:56:42.916603 | orchestrator | 2025-09-10 00:56:42 | INFO  | Task 117b03f8-6775-47b8-989a-3c5f4008eab9 is in state STARTED 2025-09-10 00:56:42.916631 | orchestrator | 2025-09-10 00:56:42 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:56:45.971443 | orchestrator | 2025-09-10 00:56:45 | INFO  | Task bee789ca-c824-40fe-8a58-e8ac5e216152 is in state STARTED 2025-09-10 00:56:45.972826 | orchestrator | 2025-09-10 00:56:45 | INFO  | Task 98398807-3013-47aa-93fc-914c29011ea0 is in state STARTED 2025-09-10 00:56:45.974595 | orchestrator | 2025-09-10 00:56:45 | INFO  | Task 117b03f8-6775-47b8-989a-3c5f4008eab9 is in state STARTED 2025-09-10 00:56:45.974774 | orchestrator | 2025-09-10 00:56:45 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:56:49.027818 | orchestrator | 2025-09-10 00:56:49 | INFO  | Task bee789ca-c824-40fe-8a58-e8ac5e216152 is in state SUCCESS 2025-09-10 00:56:49.029151 | orchestrator | 2025-09-10 00:56:49.029243 | orchestrator | 2025-09-10 00:56:49.029261 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-10 00:56:49.029276 | orchestrator | 2025-09-10 00:56:49.029290 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-10 00:56:49.029305 | orchestrator | Wednesday 10 September 2025 00:53:53 +0000 (0:00:00.284) 0:00:00.284 *** 2025-09-10 00:56:49.029319 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:56:49.029334 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:56:49.029347 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:56:49.029360 | orchestrator | 2025-09-10 00:56:49.029374 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 
2025-09-10 00:56:49.029387 | orchestrator | Wednesday 10 September 2025 00:53:53 +0000 (0:00:00.276) 0:00:00.561 ***
2025-09-10 00:56:49.029401 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2025-09-10 00:56:49.029415 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2025-09-10 00:56:49.029428 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2025-09-10 00:56:49.029465 | orchestrator |
2025-09-10 00:56:49.029479 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2025-09-10 00:56:49.029514 | orchestrator |
2025-09-10 00:56:49.029527 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-09-10 00:56:49.029541 | orchestrator | Wednesday 10 September 2025 00:53:53 +0000 (0:00:00.353) 0:00:00.915 ***
2025-09-10 00:56:49.029554 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-10 00:56:49.029569 | orchestrator |
2025-09-10 00:56:49.029584 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2025-09-10 00:56:49.029598 | orchestrator | Wednesday 10 September 2025 00:53:54 +0000 (0:00:00.410) 0:00:01.325 ***
2025-09-10 00:56:49.029612 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-10 00:56:49.029656 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-10 00:56:49.029668 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-10 00:56:49.029680 | orchestrator |
2025-09-10 00:56:49.029692 | orchestrator | TASK [opensearch : Ensuring config directories exist] **************************
2025-09-10 00:56:49.029704 | orchestrator | Wednesday 10 September 2025 00:53:54 +0000 (0:00:00.597) 0:00:01.923 ***
2025-09-10 00:56:49.029737 |
orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-10 00:56:49.029755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-10 00:56:49.029790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-10 00:56:49.029808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-10 00:56:49.029834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-10 00:56:49.029896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-10 00:56:49.029911 | orchestrator | 2025-09-10 00:56:49.029923 | orchestrator | TASK [opensearch : 
include_tasks] ********************************************** 2025-09-10 00:56:49.029936 | orchestrator | Wednesday 10 September 2025 00:53:56 +0000 (0:00:01.657) 0:00:03.580 *** 2025-09-10 00:56:49.029948 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 00:56:49.029961 | orchestrator | 2025-09-10 00:56:49.029973 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-09-10 00:56:49.029985 | orchestrator | Wednesday 10 September 2025 00:53:56 +0000 (0:00:00.463) 0:00:04.044 *** 2025-09-10 00:56:49.030009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-10 00:56:49.030110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-10 00:56:49.030180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-10 00:56:49.030195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-10 00:56:49.030218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-10 00:56:49.030233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-10 00:56:49.030254 | orchestrator | 2025-09-10 00:56:49.030268 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-09-10 00:56:49.030281 | orchestrator | Wednesday 10 September 2025 00:53:59 +0000 (0:00:02.539) 0:00:06.584 *** 2025-09-10 00:56:49.030300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-10 00:56:49.030313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-10 00:56:49.030327 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:56:49.030340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-10 00:56:49.030363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-10 00:56:49.030384 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:56:49.030398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-10 00:56:49.030417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 
'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-10 00:56:49.030431 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:56:49.030444 | orchestrator | 2025-09-10 00:56:49.030458 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-09-10 00:56:49.030471 | orchestrator | Wednesday 10 September 2025 00:54:00 +0000 (0:00:01.089) 0:00:07.673 *** 2025-09-10 00:56:49.030513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-10 00:56:49.030538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-10 00:56:49.030559 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:56:49.030572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': 
'30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-10 00:56:49.030593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-10 00:56:49.030607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-10 00:56:49.030620 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:56:49.030640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-10 00:56:49.030673 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:56:49.030686 | orchestrator | 2025-09-10 00:56:49.030698 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-09-10 00:56:49.030710 | orchestrator | Wednesday 10 September 2025 00:54:01 +0000 (0:00:01.356) 0:00:09.029 *** 2025-09-10 00:56:49.030722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-10 00:56:49.030740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-10 00:56:49.030753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-10 00:56:49.030774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-10 00:56:49.030799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-10 00:56:49.030821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-10 00:56:49.030835 | orchestrator | 2025-09-10 00:56:49.030857 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-09-10 00:56:49.030900 | orchestrator | Wednesday 10 September 2025 00:54:04 +0000 (0:00:02.529) 0:00:11.559 *** 2025-09-10 00:56:49.030937 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:56:49.030967 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:56:49.030979 | 
orchestrator | changed: [testbed-node-2] 2025-09-10 00:56:49.031001 | orchestrator | 2025-09-10 00:56:49.031031 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-09-10 00:56:49.031093 | orchestrator | Wednesday 10 September 2025 00:54:07 +0000 (0:00:03.219) 0:00:14.779 *** 2025-09-10 00:56:49.031135 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:56:49.031186 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:56:49.031207 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:56:49.031220 | orchestrator | 2025-09-10 00:56:49.031275 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-09-10 00:56:49.031300 | orchestrator | Wednesday 10 September 2025 00:54:09 +0000 (0:00:01.919) 0:00:16.699 *** 2025-09-10 00:56:49.031341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-10 00:56:49.031371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-10 00:56:49.031385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-10 00:56:49.031434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-10 00:56:49.031475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-10 00:56:49.031559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-10 00:56:49.031574 | orchestrator | 2025-09-10 00:56:49.031587 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-10 00:56:49.031599 | orchestrator | Wednesday 10 September 2025 00:54:11 +0000 (0:00:02.552) 0:00:19.251 *** 2025-09-10 00:56:49.031611 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:56:49.031623 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:56:49.031635 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:56:49.031647 | orchestrator | 2025-09-10 00:56:49.031659 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-10 00:56:49.031672 | orchestrator | Wednesday 10 September 2025 00:54:12 +0000 (0:00:00.285) 0:00:19.537 *** 2025-09-10 00:56:49.031684 | orchestrator | 2025-09-10 00:56:49.031696 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-10 00:56:49.031708 | orchestrator | Wednesday 10 September 2025 00:54:12 +0000 (0:00:00.075) 0:00:19.612 *** 2025-09-10 00:56:49.031719 | orchestrator | 2025-09-10 00:56:49.031735 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-10 00:56:49.031763 | 
orchestrator | Wednesday 10 September 2025 00:54:12 +0000 (0:00:00.063) 0:00:19.676 *** 2025-09-10 00:56:49.031775 | orchestrator | 2025-09-10 00:56:49.031787 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-09-10 00:56:49.031799 | orchestrator | Wednesday 10 September 2025 00:54:12 +0000 (0:00:00.079) 0:00:19.756 *** 2025-09-10 00:56:49.031813 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:56:49.031826 | orchestrator | 2025-09-10 00:56:49.031839 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-09-10 00:56:49.031850 | orchestrator | Wednesday 10 September 2025 00:54:12 +0000 (0:00:00.192) 0:00:19.948 *** 2025-09-10 00:56:49.031869 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:56:49.031881 | orchestrator | 2025-09-10 00:56:49.031893 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-09-10 00:56:49.031905 | orchestrator | Wednesday 10 September 2025 00:54:13 +0000 (0:00:00.709) 0:00:20.658 *** 2025-09-10 00:56:49.031917 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:56:49.031935 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:56:49.031947 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:56:49.031959 | orchestrator | 2025-09-10 00:56:49.031972 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-09-10 00:56:49.031984 | orchestrator | Wednesday 10 September 2025 00:55:14 +0000 (0:01:01.017) 0:01:21.675 *** 2025-09-10 00:56:49.032008 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:56:49.032019 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:56:49.032032 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:56:49.032042 | orchestrator | 2025-09-10 00:56:49.032052 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-10 00:56:49.032062 | 
orchestrator | Wednesday 10 September 2025 00:56:35 +0000 (0:01:20.702) 0:02:42.377 *** 2025-09-10 00:56:49.032071 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 00:56:49.032081 | orchestrator | 2025-09-10 00:56:49.032091 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-09-10 00:56:49.032101 | orchestrator | Wednesday 10 September 2025 00:56:35 +0000 (0:00:00.522) 0:02:42.900 *** 2025-09-10 00:56:49.032112 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:56:49.032122 | orchestrator | 2025-09-10 00:56:49.032132 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-09-10 00:56:49.032141 | orchestrator | Wednesday 10 September 2025 00:56:38 +0000 (0:00:02.953) 0:02:45.853 *** 2025-09-10 00:56:49.032151 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:56:49.032163 | orchestrator | 2025-09-10 00:56:49.032173 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-09-10 00:56:49.032182 | orchestrator | Wednesday 10 September 2025 00:56:40 +0000 (0:00:02.157) 0:02:48.011 *** 2025-09-10 00:56:49.032192 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:56:49.032201 | orchestrator | 2025-09-10 00:56:49.032211 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-09-10 00:56:49.032221 | orchestrator | Wednesday 10 September 2025 00:56:43 +0000 (0:00:02.640) 0:02:50.651 *** 2025-09-10 00:56:49.032231 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:56:49.032241 | orchestrator | 2025-09-10 00:56:49.032251 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-10 00:56:49.032263 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-10 00:56:49.032274 | orchestrator 
| testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-10 00:56:49.032284 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-10 00:56:49.032294 | orchestrator | 2025-09-10 00:56:49.032304 | orchestrator | 2025-09-10 00:56:49.032314 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-10 00:56:49.032330 | orchestrator | Wednesday 10 September 2025 00:56:45 +0000 (0:00:02.511) 0:02:53.163 *** 2025-09-10 00:56:49.032340 | orchestrator | =============================================================================== 2025-09-10 00:56:49.032350 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 80.70s 2025-09-10 00:56:49.032360 | orchestrator | opensearch : Restart opensearch container ------------------------------ 61.02s 2025-09-10 00:56:49.032370 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.22s 2025-09-10 00:56:49.032380 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.95s 2025-09-10 00:56:49.032391 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.64s 2025-09-10 00:56:49.032400 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.55s 2025-09-10 00:56:49.032410 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.54s 2025-09-10 00:56:49.032420 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.53s 2025-09-10 00:56:49.032430 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.51s 2025-09-10 00:56:49.032440 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.16s 2025-09-10 00:56:49.032457 | orchestrator | opensearch : Copying over 
opensearch-dashboards config file ------------- 1.92s 2025-09-10 00:56:49.032467 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.66s 2025-09-10 00:56:49.032477 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.36s 2025-09-10 00:56:49.032507 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.09s 2025-09-10 00:56:49.032518 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.71s 2025-09-10 00:56:49.032528 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.60s 2025-09-10 00:56:49.032539 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.52s 2025-09-10 00:56:49.032549 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.46s 2025-09-10 00:56:49.032559 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.41s 2025-09-10 00:56:49.032569 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.35s 2025-09-10 00:56:49.032579 | orchestrator | 2025-09-10 00:56:49 | INFO  | Task 98398807-3013-47aa-93fc-914c29011ea0 is in state STARTED 2025-09-10 00:56:49.032590 | orchestrator | 2025-09-10 00:56:49 | INFO  | Task 117b03f8-6775-47b8-989a-3c5f4008eab9 is in state STARTED 2025-09-10 00:56:49.032605 | orchestrator | 2025-09-10 00:56:49 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:56:52.077895 | orchestrator | 2025-09-10 00:56:52 | INFO  | Task 98398807-3013-47aa-93fc-914c29011ea0 is in state STARTED 2025-09-10 00:56:52.086098 | orchestrator | 2025-09-10 00:56:52 | INFO  | Task 117b03f8-6775-47b8-989a-3c5f4008eab9 is in state STARTED 2025-09-10 00:56:52.086132 | orchestrator | 2025-09-10 00:56:52 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:56:55.140814 | orchestrator | 
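(Annotation: the three retention tasks in the recap above — "Check if a log retention policy exists", "Create new log retention policy", "Apply retention policy to existing indices" — use the OpenSearch Index State Management (ISM) plugin. A minimal sketch of what such a policy body looks like, assuming a hypothetical policy ID `retention`, a 14-day delete threshold, and a `flog-*` index pattern — all illustrative values, not read from this log:)

```python
import json

# Hypothetical policy ID; the actual ID used by the role is not visible in this log.
POLICY_ID = "retention"

# ISM policy body: new indices start in "hot" and transition to "delete"
# once they exceed the (assumed) minimum age, at which point they are removed.
policy = {
    "policy": {
        "description": "Delete log indices after a retention period (illustrative).",
        "default_state": "hot",
        "states": [
            {
                "name": "hot",
                "actions": [],
                "transitions": [
                    {"state_name": "delete",
                     "conditions": {"min_index_age": "14d"}}  # assumed threshold
                ],
            },
            {"name": "delete", "actions": [{"delete": {}}], "transitions": []},
        ],
        # Auto-attach the policy to newly created indices matching the pattern.
        "ism_template": [{"index_patterns": ["flog-*"], "priority": 1}],
    }
}

# "Apply retention policy to existing indices" corresponds to posting a body
# like this against the ISM "add" endpoint for an index pattern.
apply_body = {"policy_id": POLICY_ID}

if __name__ == "__main__":
    print(json.dumps(policy, indent=2))
```

(The create step would PUT the policy body to the ISM policies endpoint and the apply step would POST `apply_body` for the existing indices; the exact request parameters used by the role are not shown in this log.)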
2025-09-10 00:56:55 | INFO  | Task 98398807-3013-47aa-93fc-914c29011ea0 is in state STARTED 2025-09-10 00:56:55.142071 | orchestrator | 2025-09-10 00:56:55 | INFO  | Task 117b03f8-6775-47b8-989a-3c5f4008eab9 is in state STARTED 2025-09-10 00:56:55.142119 | orchestrator | 2025-09-10 00:56:55 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:56:58.189459 | orchestrator | 2025-09-10 00:56:58 | INFO  | Task 98398807-3013-47aa-93fc-914c29011ea0 is in state STARTED 2025-09-10 00:56:58.191253 | orchestrator | 2025-09-10 00:56:58 | INFO  | Task 117b03f8-6775-47b8-989a-3c5f4008eab9 is in state STARTED 2025-09-10 00:56:58.191284 | orchestrator | 2025-09-10 00:56:58 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:57:01.237463 | orchestrator | 2025-09-10 00:57:01 | INFO  | Task 98398807-3013-47aa-93fc-914c29011ea0 is in state STARTED 2025-09-10 00:57:01.238870 | orchestrator | 2025-09-10 00:57:01 | INFO  | Task 117b03f8-6775-47b8-989a-3c5f4008eab9 is in state STARTED 2025-09-10 00:57:01.238906 | orchestrator | 2025-09-10 00:57:01 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:57:04.289782 | orchestrator | 2025-09-10 00:57:04 | INFO  | Task 98398807-3013-47aa-93fc-914c29011ea0 is in state STARTED 2025-09-10 00:57:04.289891 | orchestrator | 2025-09-10 00:57:04 | INFO  | Task 117b03f8-6775-47b8-989a-3c5f4008eab9 is in state STARTED 2025-09-10 00:57:04.289908 | orchestrator | 2025-09-10 00:57:04 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:57:07.330923 | orchestrator | 2025-09-10 00:57:07 | INFO  | Task 98398807-3013-47aa-93fc-914c29011ea0 is in state STARTED 2025-09-10 00:57:07.332074 | orchestrator | 2025-09-10 00:57:07 | INFO  | Task 117b03f8-6775-47b8-989a-3c5f4008eab9 is in state SUCCESS 2025-09-10 00:57:07.334385 | orchestrator | 2025-09-10 00:57:07.334431 | orchestrator | 2025-09-10 00:57:07.334443 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-09-10 
00:57:07.334478 | orchestrator | 2025-09-10 00:57:07.334490 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-09-10 00:57:07.334501 | orchestrator | Wednesday 10 September 2025 00:53:52 +0000 (0:00:00.105) 0:00:00.105 *** 2025-09-10 00:57:07.334539 | orchestrator | ok: [localhost] => { 2025-09-10 00:57:07.334553 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-09-10 00:57:07.334564 | orchestrator | } 2025-09-10 00:57:07.334575 | orchestrator | 2025-09-10 00:57:07.334586 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-09-10 00:57:07.334597 | orchestrator | Wednesday 10 September 2025 00:53:52 +0000 (0:00:00.041) 0:00:00.147 *** 2025-09-10 00:57:07.334609 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-09-10 00:57:07.334621 | orchestrator | ...ignoring 2025-09-10 00:57:07.334632 | orchestrator | 2025-09-10 00:57:07.334643 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-09-10 00:57:07.334654 | orchestrator | Wednesday 10 September 2025 00:53:55 +0000 (0:00:02.701) 0:00:02.848 *** 2025-09-10 00:57:07.334665 | orchestrator | skipping: [localhost] 2025-09-10 00:57:07.334675 | orchestrator | 2025-09-10 00:57:07.334686 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-09-10 00:57:07.334697 | orchestrator | Wednesday 10 September 2025 00:53:55 +0000 (0:00:00.038) 0:00:02.886 *** 2025-09-10 00:57:07.334708 | orchestrator | ok: [localhost] 2025-09-10 00:57:07.334718 | orchestrator | 2025-09-10 00:57:07.334729 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-10 00:57:07.334741 | orchestrator | 2025-09-10 
00:57:07.334753 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-10 00:57:07.334764 | orchestrator | Wednesday 10 September 2025 00:53:55 +0000 (0:00:00.145) 0:00:03.032 *** 2025-09-10 00:57:07.334775 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:57:07.334785 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:57:07.334796 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:57:07.334807 | orchestrator | 2025-09-10 00:57:07.334818 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-10 00:57:07.334829 | orchestrator | Wednesday 10 September 2025 00:53:55 +0000 (0:00:00.278) 0:00:03.310 *** 2025-09-10 00:57:07.334839 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-09-10 00:57:07.334850 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-09-10 00:57:07.334861 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-09-10 00:57:07.334872 | orchestrator | 2025-09-10 00:57:07.334882 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-09-10 00:57:07.334893 | orchestrator | 2025-09-10 00:57:07.334904 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-09-10 00:57:07.334915 | orchestrator | Wednesday 10 September 2025 00:53:56 +0000 (0:00:00.458) 0:00:03.768 *** 2025-09-10 00:57:07.334933 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-10 00:57:07.334947 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-09-10 00:57:07.334959 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-09-10 00:57:07.334972 | orchestrator | 2025-09-10 00:57:07.334985 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-10 00:57:07.334998 | orchestrator | Wednesday 10 September 2025 00:53:56 +0000 (0:00:00.343) 
0:00:04.111 *** 2025-09-10 00:57:07.335010 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 00:57:07.335023 | orchestrator | 2025-09-10 00:57:07.335036 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-09-10 00:57:07.335048 | orchestrator | Wednesday 10 September 2025 00:53:57 +0000 (0:00:00.546) 0:00:04.658 *** 2025-09-10 00:57:07.335083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-09-10 00:57:07.335116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-09-10 00:57:07.335131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-09-10 00:57:07.335152 | orchestrator |
2025-09-10 00:57:07.335171 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] **************
2025-09-10 00:57:07.335184 | orchestrator | Wednesday 10 September 2025 00:54:00 +0000 (0:00:02.959) 0:00:07.618 ***
2025-09-10 00:57:07.335197 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:57:07.335210 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:57:07.335223 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:57:07.335236 | orchestrator |
2025-09-10 00:57:07.335249 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] ***************************
2025-09-10 00:57:07.335262 | orchestrator | Wednesday 10 September 2025 00:54:00 +0000 (0:00:00.706) 0:00:08.324 ***
2025-09-10 00:57:07.335274 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:57:07.335287 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:57:07.335299 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:57:07.335309 | orchestrator |
2025-09-10 00:57:07.335320 | orchestrator | TASK [mariadb : Copying over config.json files for services] *******************
2025-09-10 00:57:07.335331 | orchestrator | Wednesday 10 September 2025 00:54:02 +0000 (0:00:01.875) 0:00:10.200 ***
2025-09-10 00:57:07.335347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-09-10 00:57:07.335377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-09-10 00:57:07.335390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-09-10 00:57:07.335402 | orchestrator |
2025-09-10 00:57:07.335413 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2025-09-10 00:57:07.335428 | orchestrator | Wednesday 10 September 2025 00:54:06 +0000 (0:00:03.988) 0:00:14.188 ***
2025-09-10 00:57:07.335445 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:57:07.335456 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:57:07.335467 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:57:07.335478 | orchestrator |
2025-09-10 00:57:07.335489 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2025-09-10 00:57:07.335500 | orchestrator | Wednesday 10 September 2025 00:54:08 +0000 (0:00:01.207) 0:00:15.396 ***
2025-09-10 00:57:07.335539 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:57:07.335551 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:57:07.335562 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:57:07.335573 | orchestrator |
2025-09-10 00:57:07.335583 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-09-10 00:57:07.335594 | orchestrator | Wednesday 10 September 2025 00:54:12 +0000 (0:00:04.797) 0:00:20.194 ***
2025-09-10 00:57:07.335605 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-10 00:57:07.335616 | orchestrator |
2025-09-10 00:57:07.335627 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2025-09-10 00:57:07.335638 | orchestrator | Wednesday 10 September 2025 00:54:13 +0000 (0:00:00.529) 0:00:20.724 ***
2025-09-10 00:57:07.335659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-09-10 00:57:07.335672 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:57:07.335689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-09-10 00:57:07.335707 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:57:07.335726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-09-10 00:57:07.335738 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:57:07.335749 | orchestrator |
2025-09-10 00:57:07.335760 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2025-09-10 00:57:07.335771 | orchestrator | Wednesday 10 September 2025 00:54:17 +0000 (0:00:04.441) 0:00:25.165 ***
2025-09-10 00:57:07.335787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-09-10 00:57:07.335806 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:57:07.335823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-09-10 00:57:07.335836 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:57:07.335847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-09-10 00:57:07.335865 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:57:07.335876 | orchestrator |
2025-09-10 00:57:07.335886 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2025-09-10 00:57:07.335902 | orchestrator | Wednesday 10 September 2025 00:54:20 +0000 (0:00:03.037) 0:00:28.203 ***
2025-09-10 00:57:07.335914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-09-10 00:57:07.335926 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:57:07.335946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-09-10 00:57:07.335964 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:57:07.335980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-09-10 00:57:07.335993 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:57:07.336003 | orchestrator |
2025-09-10 00:57:07.336014 | orchestrator | TASK [mariadb : Check mariadb containers] **************************************
2025-09-10 00:57:07.336025 | orchestrator | Wednesday 10 September 2025 00:54:24 +0000 (0:00:03.306) 0:00:31.510 ***
2025-09-10 00:57:07.336045 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-09-10 00:57:07.336068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-09-10 00:57:07.336090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-09-10 00:57:07.336108 | orchestrator |
2025-09-10 00:57:07.336120 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2025-09-10 00:57:07.336130 | orchestrator | Wednesday 10 September 2025 00:54:27 +0000 (0:00:03.480) 0:00:34.991 ***
2025-09-10 00:57:07.336141 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:57:07.336152 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:57:07.336163 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:57:07.336173 | orchestrator |
2025-09-10 00:57:07.336184 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2025-09-10 00:57:07.336195 | orchestrator | Wednesday 10 September 2025 00:54:28 +0000 (0:00:00.796) 0:00:35.787 ***
2025-09-10 00:57:07.336206 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:57:07.336216 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:57:07.336227 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:57:07.336238 | orchestrator |
2025-09-10 00:57:07.336248 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2025-09-10 00:57:07.336259 | orchestrator | Wednesday 10 September 2025 00:54:28 +0000 (0:00:00.546) 0:00:36.334 ***
2025-09-10 00:57:07.336270 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:57:07.336281 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:57:07.336291 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:57:07.336302 | orchestrator |
2025-09-10 00:57:07.336313 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2025-09-10 00:57:07.336323 | orchestrator | Wednesday 10 September 2025 00:54:29 +0000 (0:00:00.395) 0:00:36.730 ***
2025-09-10 00:57:07.336340 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2025-09-10 00:57:07.336351 | orchestrator | ...ignoring
2025-09-10 00:57:07.336362 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2025-09-10 00:57:07.336373 | orchestrator | ...ignoring
2025-09-10 00:57:07.336383 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2025-09-10 00:57:07.336394 | orchestrator | ...ignoring
2025-09-10 00:57:07.336405 | orchestrator |
2025-09-10 00:57:07.336415 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2025-09-10 00:57:07.336426 | orchestrator | Wednesday 10 September 2025 00:54:40 +0000 (0:00:10.922) 0:00:47.652 ***
2025-09-10 00:57:07.336437 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:57:07.336448 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:57:07.336458 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:57:07.336469 | orchestrator |
2025-09-10 00:57:07.336480 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2025-09-10 00:57:07.336490 | orchestrator | Wednesday 10 September 2025 00:54:40 +0000 (0:00:00.650) 0:00:48.073 ***
2025-09-10 00:57:07.336501 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:57:07.336562 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:57:07.336573 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:57:07.336584 | orchestrator |
2025-09-10 00:57:07.336595 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2025-09-10 00:57:07.336606 | orchestrator | Wednesday 10 September 2025 00:54:41 +0000 (0:00:00.650) 0:00:48.723 ***
2025-09-10 00:57:07.336617 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:57:07.336628 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:57:07.336638 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:57:07.336649 | orchestrator |
2025-09-10 00:57:07.336660 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2025-09-10 00:57:07.336670 | orchestrator | Wednesday 10 September 2025 00:54:41 +0000 (0:00:00.452) 0:00:49.176 ***
2025-09-10 00:57:07.336691 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:57:07.336702 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:57:07.336713 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:57:07.336723 | orchestrator |
2025-09-10 00:57:07.336734 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2025-09-10 00:57:07.336745 | orchestrator | Wednesday 10 September 2025 00:54:42 +0000 (0:00:00.425) 0:00:49.602 ***
2025-09-10 00:57:07.336755 | orchestrator | ok: [testbed-node-0]
2025-09-10 00:57:07.336766 | orchestrator | ok: [testbed-node-1]
2025-09-10 00:57:07.336776 | orchestrator | ok: [testbed-node-2]
2025-09-10 00:57:07.336787 | orchestrator |
2025-09-10 00:57:07.336798 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2025-09-10 00:57:07.336809 | orchestrator | Wednesday 10 September 2025 00:54:42 +0000 (0:00:00.400) 0:00:50.002 ***
2025-09-10 00:57:07.336825 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:57:07.336836 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:57:07.336847 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:57:07.336858 | orchestrator |
2025-09-10 00:57:07.336868 | orchestrator | TASK [mariadb : include_tasks]
************************************************* 2025-09-10 00:57:07.336879 | orchestrator | Wednesday 10 September 2025 00:54:43 +0000 (0:00:00.869) 0:00:50.872 *** 2025-09-10 00:57:07.336890 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:57:07.336900 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:57:07.336911 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-09-10 00:57:07.336922 | orchestrator | 2025-09-10 00:57:07.336933 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-09-10 00:57:07.336944 | orchestrator | Wednesday 10 September 2025 00:54:43 +0000 (0:00:00.389) 0:00:51.261 *** 2025-09-10 00:57:07.336954 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:57:07.336965 | orchestrator | 2025-09-10 00:57:07.336976 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-09-10 00:57:07.336987 | orchestrator | Wednesday 10 September 2025 00:54:54 +0000 (0:00:10.235) 0:01:01.497 *** 2025-09-10 00:57:07.336998 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:57:07.337008 | orchestrator | 2025-09-10 00:57:07.337019 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-10 00:57:07.337030 | orchestrator | Wednesday 10 September 2025 00:54:54 +0000 (0:00:00.121) 0:01:01.618 *** 2025-09-10 00:57:07.337040 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:57:07.337051 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:57:07.337061 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:57:07.337070 | orchestrator | 2025-09-10 00:57:07.337080 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-09-10 00:57:07.337089 | orchestrator | Wednesday 10 September 2025 00:54:55 +0000 (0:00:01.062) 0:01:02.681 *** 2025-09-10 00:57:07.337099 | orchestrator | changed: [testbed-node-0] 
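The handlers that follow ("Wait for first MariaDB service port liveness") behave like Ansible's `wait_for` module with `search_regex`: they repeatedly open a TCP connection to the database port and succeed only once the server banner contains the string `MariaDB` (the earlier ignored failures at 00:54:29 show the timeout form of the same check, before any server was running). A minimal sketch of such a probe in Python, using a stand-in local server with an illustrative banner rather than a real MariaDB handshake:

```python
import re
import socket
import threading
import time

def wait_for_banner(host, port, search, timeout=10.0):
    # Poll host:port until the first bytes sent by the service match
    # `search`, loosely mimicking Ansible wait_for with search_regex.
    deadline = time.monotonic() + timeout
    pattern = re.compile(search)
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0) as s:
                s.settimeout(1.0)
                banner = s.recv(256).decode("utf-8", errors="replace")
                if pattern.search(banner):
                    return True
        except OSError:
            pass  # connection refused or no data yet; retry
        time.sleep(0.2)
    return False

# Stand-in server: a real MariaDB listener sends a binary handshake
# that embeds its version string; this fake banner is just for the demo.
def _serve_once(srv):
    conn, _ = srv.accept()
    conn.sendall(b"5.5.5-10.11.6-MariaDB\x00")
    conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
threading.Thread(target=_serve_once, args=(srv,), daemon=True).start()

ok = wait_for_banner("127.0.0.1", srv.getsockname()[1], "MariaDB", timeout=5.0)
print(ok)
```

This is only a sketch of the liveness check's shape; the deployed task additionally follows up with the WSREP sync check before declaring the node healthy.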
2025-09-10 00:57:07.337108 | orchestrator | 2025-09-10 00:57:07.337118 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-09-10 00:57:07.337127 | orchestrator | Wednesday 10 September 2025 00:55:03 +0000 (0:00:07.786) 0:01:10.467 *** 2025-09-10 00:57:07.337137 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:57:07.337147 | orchestrator | 2025-09-10 00:57:07.337156 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-09-10 00:57:07.337166 | orchestrator | Wednesday 10 September 2025 00:55:04 +0000 (0:00:01.617) 0:01:12.085 *** 2025-09-10 00:57:07.337175 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:57:07.337185 | orchestrator | 2025-09-10 00:57:07.337194 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-09-10 00:57:07.337204 | orchestrator | Wednesday 10 September 2025 00:55:07 +0000 (0:00:02.549) 0:01:14.634 *** 2025-09-10 00:57:07.337214 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:57:07.337223 | orchestrator | 2025-09-10 00:57:07.337233 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-09-10 00:57:07.337248 | orchestrator | Wednesday 10 September 2025 00:55:07 +0000 (0:00:00.118) 0:01:14.752 *** 2025-09-10 00:57:07.337257 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:57:07.337271 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:57:07.337281 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:57:07.337290 | orchestrator | 2025-09-10 00:57:07.337300 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-09-10 00:57:07.337310 | orchestrator | Wednesday 10 September 2025 00:55:07 +0000 (0:00:00.284) 0:01:15.036 *** 2025-09-10 00:57:07.337319 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:57:07.337329 | orchestrator | [WARNING]: Could not match supplied 
host pattern, ignoring: mariadb_restart 2025-09-10 00:57:07.337338 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:57:07.337348 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:57:07.337357 | orchestrator | 2025-09-10 00:57:07.337367 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-09-10 00:57:07.337376 | orchestrator | skipping: no hosts matched 2025-09-10 00:57:07.337386 | orchestrator | 2025-09-10 00:57:07.337395 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-10 00:57:07.337405 | orchestrator | 2025-09-10 00:57:07.337414 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-10 00:57:07.337424 | orchestrator | Wednesday 10 September 2025 00:55:08 +0000 (0:00:00.513) 0:01:15.550 *** 2025-09-10 00:57:07.337433 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:57:07.337442 | orchestrator | 2025-09-10 00:57:07.337452 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-10 00:57:07.337462 | orchestrator | Wednesday 10 September 2025 00:55:27 +0000 (0:00:19.495) 0:01:35.046 *** 2025-09-10 00:57:07.337471 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:57:07.337481 | orchestrator | 2025-09-10 00:57:07.337490 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-10 00:57:07.337500 | orchestrator | Wednesday 10 September 2025 00:55:48 +0000 (0:00:20.602) 0:01:55.649 *** 2025-09-10 00:57:07.337524 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:57:07.337534 | orchestrator | 2025-09-10 00:57:07.337543 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-10 00:57:07.337553 | orchestrator | 2025-09-10 00:57:07.337562 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-10 
00:57:07.337572 | orchestrator | Wednesday 10 September 2025 00:55:50 +0000 (0:00:02.378) 0:01:58.028 *** 2025-09-10 00:57:07.337582 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:57:07.337591 | orchestrator | 2025-09-10 00:57:07.337601 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-10 00:57:07.337611 | orchestrator | Wednesday 10 September 2025 00:56:15 +0000 (0:00:24.991) 0:02:23.020 *** 2025-09-10 00:57:07.337620 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:57:07.337630 | orchestrator | 2025-09-10 00:57:07.337639 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-10 00:57:07.337649 | orchestrator | Wednesday 10 September 2025 00:56:32 +0000 (0:00:16.602) 0:02:39.622 *** 2025-09-10 00:57:07.337659 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:57:07.337668 | orchestrator | 2025-09-10 00:57:07.337678 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-09-10 00:57:07.337687 | orchestrator | 2025-09-10 00:57:07.337701 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-10 00:57:07.337711 | orchestrator | Wednesday 10 September 2025 00:56:34 +0000 (0:00:02.506) 0:02:42.129 *** 2025-09-10 00:57:07.337721 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:57:07.337730 | orchestrator | 2025-09-10 00:57:07.337740 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-10 00:57:07.337750 | orchestrator | Wednesday 10 September 2025 00:56:46 +0000 (0:00:12.066) 0:02:54.196 *** 2025-09-10 00:57:07.337759 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:57:07.337769 | orchestrator | 2025-09-10 00:57:07.337783 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-10 00:57:07.337793 | orchestrator | Wednesday 10 September 2025 
00:56:51 +0000 (0:00:04.546) 0:02:58.742 *** 2025-09-10 00:57:07.337803 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:57:07.337813 | orchestrator | 2025-09-10 00:57:07.337822 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-09-10 00:57:07.337832 | orchestrator | 2025-09-10 00:57:07.337841 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-09-10 00:57:07.337851 | orchestrator | Wednesday 10 September 2025 00:56:54 +0000 (0:00:02.794) 0:03:01.537 *** 2025-09-10 00:57:07.337860 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 00:57:07.337870 | orchestrator | 2025-09-10 00:57:07.337880 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-09-10 00:57:07.337889 | orchestrator | Wednesday 10 September 2025 00:56:54 +0000 (0:00:00.558) 0:03:02.095 *** 2025-09-10 00:57:07.337899 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:57:07.337908 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:57:07.337918 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:57:07.337927 | orchestrator | 2025-09-10 00:57:07.337937 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-09-10 00:57:07.337946 | orchestrator | Wednesday 10 September 2025 00:56:56 +0000 (0:00:02.243) 0:03:04.339 *** 2025-09-10 00:57:07.337956 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:57:07.337965 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:57:07.337975 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:57:07.337985 | orchestrator | 2025-09-10 00:57:07.337994 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-09-10 00:57:07.338004 | orchestrator | Wednesday 10 September 2025 00:56:59 +0000 (0:00:02.175) 0:03:06.514 *** 2025-09-10 00:57:07.338042 | 
orchestrator | skipping: [testbed-node-1] 2025-09-10 00:57:07.338054 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:57:07.338063 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:57:07.338073 | orchestrator | 2025-09-10 00:57:07.338082 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-09-10 00:57:07.338092 | orchestrator | Wednesday 10 September 2025 00:57:01 +0000 (0:00:02.128) 0:03:08.643 *** 2025-09-10 00:57:07.338101 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:57:07.338111 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:57:07.338120 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:57:07.338130 | orchestrator | 2025-09-10 00:57:07.338139 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-09-10 00:57:07.338153 | orchestrator | Wednesday 10 September 2025 00:57:03 +0000 (0:00:02.051) 0:03:10.694 *** 2025-09-10 00:57:07.338163 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:57:07.338172 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:57:07.338182 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:57:07.338191 | orchestrator | 2025-09-10 00:57:07.338201 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-09-10 00:57:07.338210 | orchestrator | Wednesday 10 September 2025 00:57:06 +0000 (0:00:02.951) 0:03:13.646 *** 2025-09-10 00:57:07.338220 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:57:07.338229 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:57:07.338239 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:57:07.338248 | orchestrator | 2025-09-10 00:57:07.338258 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-10 00:57:07.338268 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-09-10 00:57:07.338278 | orchestrator 
| testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-09-10 00:57:07.338289 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-09-10 00:57:07.338304 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-09-10 00:57:07.338313 | orchestrator | 2025-09-10 00:57:07.338323 | orchestrator | 2025-09-10 00:57:07.338332 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-10 00:57:07.338342 | orchestrator | Wednesday 10 September 2025 00:57:06 +0000 (0:00:00.420) 0:03:14.067 *** 2025-09-10 00:57:07.338352 | orchestrator | =============================================================================== 2025-09-10 00:57:07.338361 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 44.49s 2025-09-10 00:57:07.338371 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 37.21s 2025-09-10 00:57:07.338380 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 12.07s 2025-09-10 00:57:07.338390 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.92s 2025-09-10 00:57:07.338399 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.24s 2025-09-10 00:57:07.338409 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.79s 2025-09-10 00:57:07.338424 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.89s 2025-09-10 00:57:07.338434 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.80s 2025-09-10 00:57:07.338443 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.55s 2025-09-10 00:57:07.338453 | orchestrator | service-cert-copy : 
mariadb | Copying over extra CA certificates -------- 4.44s 2025-09-10 00:57:07.338462 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.99s 2025-09-10 00:57:07.338472 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.48s 2025-09-10 00:57:07.338481 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.31s 2025-09-10 00:57:07.338491 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.04s 2025-09-10 00:57:07.338501 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.96s 2025-09-10 00:57:07.338525 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.95s 2025-09-10 00:57:07.338535 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.79s 2025-09-10 00:57:07.338545 | orchestrator | Check MariaDB service --------------------------------------------------- 2.70s 2025-09-10 00:57:07.338554 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.55s 2025-09-10 00:57:07.338564 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.24s 2025-09-10 00:57:07.338574 | orchestrator | 2025-09-10 00:57:07 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:57:10.387220 | orchestrator | 2025-09-10 00:57:10 | INFO  | Task bef651aa-bfdf-41e3-a2ca-a92bab4e5a06 is in state STARTED 2025-09-10 00:57:10.387311 | orchestrator | 2025-09-10 00:57:10 | INFO  | Task 98398807-3013-47aa-93fc-914c29011ea0 is in state STARTED 2025-09-10 00:57:10.387682 | orchestrator | 2025-09-10 00:57:10 | INFO  | Task 346e6093-dde8-4ac1-91af-d10c1f2d0fff is in state STARTED 2025-09-10 00:57:10.387784 | orchestrator | 2025-09-10 00:57:10 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:58:23.499593 | orchestrator |
2025-09-10 00:58:23 | INFO  | Task bef651aa-bfdf-41e3-a2ca-a92bab4e5a06 is in state STARTED 2025-09-10 00:58:23.501240 | orchestrator | 2025-09-10 00:58:23 | INFO  | Task 98398807-3013-47aa-93fc-914c29011ea0 is in state STARTED 2025-09-10 00:58:23.505067 | orchestrator | 2025-09-10 00:58:23 | INFO  | Task 346e6093-dde8-4ac1-91af-d10c1f2d0fff is in state STARTED 2025-09-10 00:58:23.505153 | orchestrator | 2025-09-10 00:58:23 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:58:26.550260 | orchestrator | 2025-09-10 00:58:26 | INFO  | Task bef651aa-bfdf-41e3-a2ca-a92bab4e5a06 is in state STARTED 2025-09-10 00:58:26.556021 | orchestrator | 2025-09-10 00:58:26 | INFO  | Task 98398807-3013-47aa-93fc-914c29011ea0 is in state SUCCESS 2025-09-10 00:58:26.558207 | orchestrator | 2025-09-10 00:58:26.558251 | orchestrator | 2025-09-10 00:58:26.558257 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-09-10 00:58:26.558261 | orchestrator | 2025-09-10 00:58:26.558265 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-09-10 00:58:26.558269 | orchestrator | Wednesday 10 September 2025 00:56:12 +0000 (0:00:00.587) 0:00:00.587 *** 2025-09-10 00:58:26.558273 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-10 00:58:26.558278 | orchestrator | 2025-09-10 00:58:26.558282 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-09-10 00:58:26.558285 | orchestrator | Wednesday 10 September 2025 00:56:13 +0000 (0:00:00.619) 0:00:01.206 *** 2025-09-10 00:58:26.558289 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:58:26.558294 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:58:26.558297 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:58:26.558301 | orchestrator | 2025-09-10 00:58:26.558305 | orchestrator | TASK [ceph-facts : Set_fact 
is_atomic] ***************************************** 2025-09-10 00:58:26.558308 | orchestrator | Wednesday 10 September 2025 00:56:14 +0000 (0:00:00.736) 0:00:01.943 *** 2025-09-10 00:58:26.558312 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:58:26.558316 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:58:26.558320 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:58:26.558323 | orchestrator | 2025-09-10 00:58:26.558397 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-09-10 00:58:26.558404 | orchestrator | Wednesday 10 September 2025 00:56:14 +0000 (0:00:00.280) 0:00:02.223 *** 2025-09-10 00:58:26.558408 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:58:26.558411 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:58:26.558415 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:58:26.558419 | orchestrator | 2025-09-10 00:58:26.558423 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-09-10 00:58:26.558426 | orchestrator | Wednesday 10 September 2025 00:56:15 +0000 (0:00:00.804) 0:00:03.028 *** 2025-09-10 00:58:26.558430 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:58:26.558455 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:58:26.558459 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:58:26.558463 | orchestrator | 2025-09-10 00:58:26.558467 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-09-10 00:58:26.558471 | orchestrator | Wednesday 10 September 2025 00:56:15 +0000 (0:00:00.307) 0:00:03.335 *** 2025-09-10 00:58:26.558475 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:58:26.558487 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:58:26.558491 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:58:26.558495 | orchestrator | 2025-09-10 00:58:26.558499 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-09-10 
00:58:26.558513 | orchestrator | Wednesday 10 September 2025 00:56:15 +0000 (0:00:00.303) 0:00:03.639 *** 2025-09-10 00:58:26.558517 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:58:26.558521 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:58:26.558525 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:58:26.558555 | orchestrator | 2025-09-10 00:58:26.558559 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-09-10 00:58:26.558563 | orchestrator | Wednesday 10 September 2025 00:56:16 +0000 (0:00:00.314) 0:00:03.954 *** 2025-09-10 00:58:26.558567 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:58:26.558571 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:58:26.558575 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:58:26.558579 | orchestrator | 2025-09-10 00:58:26.558582 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-09-10 00:58:26.558586 | orchestrator | Wednesday 10 September 2025 00:56:16 +0000 (0:00:00.538) 0:00:04.492 *** 2025-09-10 00:58:26.558590 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:58:26.558594 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:58:26.558597 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:58:26.558601 | orchestrator | 2025-09-10 00:58:26.558605 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-09-10 00:58:26.558609 | orchestrator | Wednesday 10 September 2025 00:56:17 +0000 (0:00:00.326) 0:00:04.819 *** 2025-09-10 00:58:26.558612 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-10 00:58:26.558616 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-10 00:58:26.558620 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-10 00:58:26.558624 | orchestrator | 2025-09-10 
00:58:26.558646 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-09-10 00:58:26.558650 | orchestrator | Wednesday 10 September 2025 00:56:17 +0000 (0:00:00.635) 0:00:05.454 *** 2025-09-10 00:58:26.558654 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:58:26.558657 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:58:26.558661 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:58:26.558665 | orchestrator | 2025-09-10 00:58:26.558669 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-09-10 00:58:26.558672 | orchestrator | Wednesday 10 September 2025 00:56:18 +0000 (0:00:00.406) 0:00:05.860 *** 2025-09-10 00:58:26.558676 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-10 00:58:26.558680 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-10 00:58:26.558683 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-10 00:58:26.558687 | orchestrator | 2025-09-10 00:58:26.558691 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-09-10 00:58:26.558695 | orchestrator | Wednesday 10 September 2025 00:56:20 +0000 (0:00:02.198) 0:00:08.058 *** 2025-09-10 00:58:26.558699 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-10 00:58:26.558736 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-10 00:58:26.558742 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-10 00:58:26.558746 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:58:26.558750 | orchestrator | 2025-09-10 00:58:26.558754 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-09-10 00:58:26.558766 | orchestrator | Wednesday 10 September 2025 00:56:20 +0000 (0:00:00.383) 
0:00:08.442 *** 2025-09-10 00:58:26.558771 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-09-10 00:58:26.558775 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-09-10 00:58:26.558783 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-09-10 00:58:26.558787 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:58:26.558791 | orchestrator | 2025-09-10 00:58:26.559221 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-09-10 00:58:26.559229 | orchestrator | Wednesday 10 September 2025 00:56:21 +0000 (0:00:00.792) 0:00:09.234 *** 2025-09-10 00:58:26.559233 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-10 00:58:26.559241 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-10 00:58:26.559245 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-10 00:58:26.559249 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:58:26.559253 | orchestrator | 2025-09-10 00:58:26.559257 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-09-10 00:58:26.559261 | orchestrator | Wednesday 10 September 2025 00:56:21 +0000 (0:00:00.153) 0:00:09.388 *** 2025-09-10 00:58:26.559265 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '45a3d1f7df9a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-09-10 00:56:18.769688', 'end': '2025-09-10 00:56:18.818955', 'delta': '0:00:00.049267', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['45a3d1f7df9a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-09-10 00:58:26.559271 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'd0ab6e71e5da', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-10 00:56:19.568443', 'end': '2025-09-10 
00:56:19.612293', 'delta': '0:00:00.043850', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d0ab6e71e5da'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-09-10 00:58:26.559288 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '04d986fa8d3d', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-09-10 00:56:20.134100', 'end': '2025-09-10 00:56:20.170567', 'delta': '0:00:00.036467', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['04d986fa8d3d'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-09-10 00:58:26.559297 | orchestrator | 2025-09-10 00:58:26.559317 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-09-10 00:58:26.559321 | orchestrator | Wednesday 10 September 2025 00:56:22 +0000 (0:00:00.473) 0:00:09.861 *** 2025-09-10 00:58:26.559324 | orchestrator | ok: [testbed-node-3] 2025-09-10 00:58:26.559328 | orchestrator | ok: [testbed-node-4] 2025-09-10 00:58:26.559332 | orchestrator | ok: [testbed-node-5] 2025-09-10 00:58:26.559336 | orchestrator | 2025-09-10 00:58:26.559339 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-09-10 00:58:26.559343 | orchestrator 
| Wednesday 10 September 2025 00:56:22 +0000 (0:00:00.430) 0:00:10.292 *** 2025-09-10 00:58:26.559347 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-09-10 00:58:26.559351 | orchestrator | 2025-09-10 00:58:26.559355 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-09-10 00:58:26.559358 | orchestrator | Wednesday 10 September 2025 00:56:24 +0000 (0:00:01.855) 0:00:12.148 *** 2025-09-10 00:58:26.559362 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:58:26.559366 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:58:26.559370 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:58:26.559373 | orchestrator | 2025-09-10 00:58:26.559377 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-09-10 00:58:26.559381 | orchestrator | Wednesday 10 September 2025 00:56:24 +0000 (0:00:00.281) 0:00:12.429 *** 2025-09-10 00:58:26.559384 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:58:26.559390 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:58:26.559394 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:58:26.559398 | orchestrator | 2025-09-10 00:58:26.559402 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-10 00:58:26.559406 | orchestrator | Wednesday 10 September 2025 00:56:25 +0000 (0:00:00.433) 0:00:12.863 *** 2025-09-10 00:58:26.559409 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:58:26.559413 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:58:26.559417 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:58:26.559421 | orchestrator | 2025-09-10 00:58:26.559424 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-09-10 00:58:26.559428 | orchestrator | Wednesday 10 September 2025 00:56:25 +0000 (0:00:00.500) 0:00:13.363 *** 2025-09-10 00:58:26.559432 | orchestrator | 
ok: [testbed-node-3] 2025-09-10 00:58:26.559435 | orchestrator | 2025-09-10 00:58:26.559439 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-09-10 00:58:26.559443 | orchestrator | Wednesday 10 September 2025 00:56:25 +0000 (0:00:00.144) 0:00:13.508 *** 2025-09-10 00:58:26.559447 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:58:26.559450 | orchestrator | 2025-09-10 00:58:26.559454 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-10 00:58:26.559458 | orchestrator | Wednesday 10 September 2025 00:56:25 +0000 (0:00:00.225) 0:00:13.733 *** 2025-09-10 00:58:26.559462 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:58:26.559465 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:58:26.559469 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:58:26.559473 | orchestrator | 2025-09-10 00:58:26.559476 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-09-10 00:58:26.559480 | orchestrator | Wednesday 10 September 2025 00:56:26 +0000 (0:00:00.272) 0:00:14.006 *** 2025-09-10 00:58:26.559487 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:58:26.559490 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:58:26.559494 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:58:26.559498 | orchestrator | 2025-09-10 00:58:26.559501 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-09-10 00:58:26.559505 | orchestrator | Wednesday 10 September 2025 00:56:26 +0000 (0:00:00.332) 0:00:14.338 *** 2025-09-10 00:58:26.559509 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:58:26.559513 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:58:26.559516 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:58:26.559520 | orchestrator | 2025-09-10 00:58:26.559524 | orchestrator | TASK [ceph-facts : Resolve 
dedicated_device link(s)] *************************** 2025-09-10 00:58:26.559527 | orchestrator | Wednesday 10 September 2025 00:56:27 +0000 (0:00:00.604) 0:00:14.943 *** 2025-09-10 00:58:26.559531 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:58:26.559535 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:58:26.559539 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:58:26.559542 | orchestrator | 2025-09-10 00:58:26.559546 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-09-10 00:58:26.559550 | orchestrator | Wednesday 10 September 2025 00:56:27 +0000 (0:00:00.342) 0:00:15.286 *** 2025-09-10 00:58:26.559554 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:58:26.559557 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:58:26.559561 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:58:26.559565 | orchestrator | 2025-09-10 00:58:26.559568 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-09-10 00:58:26.559572 | orchestrator | Wednesday 10 September 2025 00:56:27 +0000 (0:00:00.339) 0:00:15.626 *** 2025-09-10 00:58:26.559576 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:58:26.559580 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:58:26.559583 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:58:26.559587 | orchestrator | 2025-09-10 00:58:26.559591 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-09-10 00:58:26.559605 | orchestrator | Wednesday 10 September 2025 00:56:28 +0000 (0:00:00.297) 0:00:15.923 *** 2025-09-10 00:58:26.559609 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:58:26.559613 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:58:26.559617 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:58:26.559620 | orchestrator | 2025-09-10 00:58:26.559641 | orchestrator | TASK [ceph-facts : Collect 
existed devices] ************************************ 2025-09-10 00:58:26.559646 | orchestrator | Wednesday 10 September 2025 00:56:28 +0000 (0:00:00.519) 0:00:16.443 *** 2025-09-10 00:58:26.559650 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4b73e898--cb4c--523f--8aca--971ee560c7ea-osd--block--4b73e898--cb4c--523f--8aca--971ee560c7ea', 'dm-uuid-LVM-uE5Yjf2CsxkFgHgIpbKsPiyHm2TurikN3S280Dz2nod0tzAwO5S1pyjk2inle8Pf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-10 00:58:26.559654 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2bea83b6--6800--529c--bdd8--a613f3421a6f-osd--block--2bea83b6--6800--529c--bdd8--a613f3421a6f', 'dm-uuid-LVM-ZI1l2hrd5ozIdIPbGSORiFKfU4pLhNqBcQz7LQPLKX2159t3r3EwXsxnR1q2MZN6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-10 00:58:26.559660 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-10 00:58:26.559667 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-10 00:58:26.559671 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-10 00:58:26.559676 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-10 00:58:26.559679 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-10 00:58:26.559694 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-10 00:58:26.559698 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-10 00:58:26.559702 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-10 00:58:26.559706 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--20419d67--2a88--5ee6--832e--dd0a34a7687a-osd--block--20419d67--2a88--5ee6--832e--dd0a34a7687a', 'dm-uuid-LVM-2hB5Q2a5udGrsgyYcPzLEBRo1qDiEpux1eUKdeDY1QiWJ6egp5CTMxBgXWbEK8V7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-10 00:58:26.559717 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_885f5351-8a1d-42ab-b6e2-24d16c7f1b28', 'scsi-SQEMU_QEMU_HARDDISK_885f5351-8a1d-42ab-b6e2-24d16c7f1b28'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_885f5351-8a1d-42ab-b6e2-24d16c7f1b28-part1', 'scsi-SQEMU_QEMU_HARDDISK_885f5351-8a1d-42ab-b6e2-24d16c7f1b28-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_885f5351-8a1d-42ab-b6e2-24d16c7f1b28-part14', 'scsi-SQEMU_QEMU_HARDDISK_885f5351-8a1d-42ab-b6e2-24d16c7f1b28-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_885f5351-8a1d-42ab-b6e2-24d16c7f1b28-part15', 'scsi-SQEMU_QEMU_HARDDISK_885f5351-8a1d-42ab-b6e2-24d16c7f1b28-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_885f5351-8a1d-42ab-b6e2-24d16c7f1b28-part16', 'scsi-SQEMU_QEMU_HARDDISK_885f5351-8a1d-42ab-b6e2-24d16c7f1b28-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-10 00:58:26.559731 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--28e77ae9--929e--5c68--8a2a--91f3bea00aca-osd--block--28e77ae9--929e--5c68--8a2a--91f3bea00aca', 'dm-uuid-LVM-k27w3X3DUUX1XZAerGiKa0AfnUAShbWdavK2lWdW2BR1Va40lsJhz6WVU6V9WMmo'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-10 00:58:26.559736 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--4b73e898--cb4c--523f--8aca--971ee560c7ea-osd--block--4b73e898--cb4c--523f--8aca--971ee560c7ea'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YtsAg8-uQKK-fp6U-2Eoq-WWNO-UgLL-3GllCz', 'scsi-0QEMU_QEMU_HARDDISK_15be4489-a4ef-46b7-8669-fa4e45790ef6', 'scsi-SQEMU_QEMU_HARDDISK_15be4489-a4ef-46b7-8669-fa4e45790ef6'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-10 00:58:26.559740 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-10 00:58:26.559750 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': 
{'holders': ['ceph--2bea83b6--6800--529c--bdd8--a613f3421a6f-osd--block--2bea83b6--6800--529c--bdd8--a613f3421a6f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1at0Au-Fg2h-xPI3-6SzS-AeD9-5EM6-mEzMll', 'scsi-0QEMU_QEMU_HARDDISK_2400494c-3cf2-4780-9c6b-527d492c6bfd', 'scsi-SQEMU_QEMU_HARDDISK_2400494c-3cf2-4780-9c6b-527d492c6bfd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-10 00:58:26.559754 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-10 00:58:26.559758 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b761a3bc-d220-47d2-9376-37cff0079757', 'scsi-SQEMU_QEMU_HARDDISK_b761a3bc-d220-47d2-9376-37cff0079757'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-10 00:58:26.559762 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-10 00:58:26.559775 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-10-00-02-28-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-10 00:58:26.559780 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2025-09-10 00:58:26.559784 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-10 00:58:26.559788 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-10 00:58:26.559796 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-10 00:58:26.559800 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-10 00:58:26.559806 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d1bd04c-2fb3-49ff-963e-06aac3d99067', 'scsi-SQEMU_QEMU_HARDDISK_1d1bd04c-2fb3-49ff-963e-06aac3d99067'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d1bd04c-2fb3-49ff-963e-06aac3d99067-part1', 'scsi-SQEMU_QEMU_HARDDISK_1d1bd04c-2fb3-49ff-963e-06aac3d99067-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d1bd04c-2fb3-49ff-963e-06aac3d99067-part14', 'scsi-SQEMU_QEMU_HARDDISK_1d1bd04c-2fb3-49ff-963e-06aac3d99067-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d1bd04c-2fb3-49ff-963e-06aac3d99067-part15', 'scsi-SQEMU_QEMU_HARDDISK_1d1bd04c-2fb3-49ff-963e-06aac3d99067-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d1bd04c-2fb3-49ff-963e-06aac3d99067-part16', 'scsi-SQEMU_QEMU_HARDDISK_1d1bd04c-2fb3-49ff-963e-06aac3d99067-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-10 00:58:26.559811 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--20419d67--2a88--5ee6--832e--dd0a34a7687a-osd--block--20419d67--2a88--5ee6--832e--dd0a34a7687a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-56OBMz-I7EX-VavL-tZwu-3Gki-M3Zl-yWbhOp', 'scsi-0QEMU_QEMU_HARDDISK_4b58dc27-9074-4d6f-a7a6-fd15259a7e00', 'scsi-SQEMU_QEMU_HARDDISK_4b58dc27-9074-4d6f-a7a6-fd15259a7e00'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-10 00:58:26.559818 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--28e77ae9--929e--5c68--8a2a--91f3bea00aca-osd--block--28e77ae9--929e--5c68--8a2a--91f3bea00aca'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-d25IX1-hlTn-ZBV2-xwSV-H1En-zkyU-H6wSTA', 'scsi-0QEMU_QEMU_HARDDISK_4a1ca5dc-6f40-4107-905c-b866d721086e', 'scsi-SQEMU_QEMU_HARDDISK_4a1ca5dc-6f40-4107-905c-b866d721086e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-10 00:58:26.559825 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ffc2c63-19a1-4e5a-ab50-a28911e045bb', 'scsi-SQEMU_QEMU_HARDDISK_9ffc2c63-19a1-4e5a-ab50-a28911e045bb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-10 00:58:26.559829 | orchestrator | skipping: [testbed-node-3] 2025-09-10 00:58:26.559833 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-10-00-02-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-10 00:58:26.559837 | orchestrator | skipping: [testbed-node-4] 2025-09-10 00:58:26.559841 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--36dac960--67a7--54a4--bbd2--b6f8976b18f7-osd--block--36dac960--67a7--54a4--bbd2--b6f8976b18f7', 'dm-uuid-LVM-UcEdMXyLpheVryiFjGGikHOHzacaQqtC6drg4fiUEBxjwrRysilyddDiBDve0Xzr'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-10 00:58:26.559847 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 
'host': '', 'links': {'ids': ['dm-name-ceph--f4115e81--926e--57fb--8145--65084efa4466-osd--block--f4115e81--926e--57fb--8145--65084efa4466', 'dm-uuid-LVM-yNmKPiSdCRM90Ij0ZxbCNNJY3U3c3fnFtFxM8rdvExiBtaoR2TkcVtvQsk8io0dz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-10 00:58:26.559851 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-10 00:58:26.559856 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-10 00:58:26.559863 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-10 00:58:26.559869 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-10 00:58:26.559874 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-10 00:58:26.559878 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-10 00:58:26.559883 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-10 00:58:26.559887 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-10 00:58:26.559895 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_76e3aa19-82bc-491d-ab0d-cf92a10f04de', 'scsi-SQEMU_QEMU_HARDDISK_76e3aa19-82bc-491d-ab0d-cf92a10f04de'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_76e3aa19-82bc-491d-ab0d-cf92a10f04de-part1', 'scsi-SQEMU_QEMU_HARDDISK_76e3aa19-82bc-491d-ab0d-cf92a10f04de-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_76e3aa19-82bc-491d-ab0d-cf92a10f04de-part14', 'scsi-SQEMU_QEMU_HARDDISK_76e3aa19-82bc-491d-ab0d-cf92a10f04de-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_76e3aa19-82bc-491d-ab0d-cf92a10f04de-part15', 'scsi-SQEMU_QEMU_HARDDISK_76e3aa19-82bc-491d-ab0d-cf92a10f04de-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_76e3aa19-82bc-491d-ab0d-cf92a10f04de-part16', 'scsi-SQEMU_QEMU_HARDDISK_76e3aa19-82bc-491d-ab0d-cf92a10f04de-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 
'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-10 00:58:26.559905 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--36dac960--67a7--54a4--bbd2--b6f8976b18f7-osd--block--36dac960--67a7--54a4--bbd2--b6f8976b18f7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-W5LkY6-eCu4-yrK5-dhKk-hJo8-SW0n-F6I41f', 'scsi-0QEMU_QEMU_HARDDISK_2ea24e78-5d32-46b6-abea-13531085710c', 'scsi-SQEMU_QEMU_HARDDISK_2ea24e78-5d32-46b6-abea-13531085710c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-10 00:58:26.559910 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--f4115e81--926e--57fb--8145--65084efa4466-osd--block--f4115e81--926e--57fb--8145--65084efa4466'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0R9dBU-VzRA-BtuH-Y8EJ-iZ82-XXeb-Hwxfe0', 'scsi-0QEMU_QEMU_HARDDISK_b86cce0a-40ed-4a07-99f1-19becb84c901', 'scsi-SQEMU_QEMU_HARDDISK_b86cce0a-40ed-4a07-99f1-19becb84c901'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-10 00:58:26.559915 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d735b4b4-15bb-46a1-a658-203c9ec5fb9c', 'scsi-SQEMU_QEMU_HARDDISK_d735b4b4-15bb-46a1-a658-203c9ec5fb9c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-10 00:58:26.559922 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-10-00-02-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-10 00:58:26.559926 | orchestrator | skipping: [testbed-node-5] 2025-09-10 00:58:26.559930 | orchestrator | 2025-09-10 00:58:26.559935 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2025-09-10 00:58:26.559942 | orchestrator | Wednesday 10 September 2025 00:56:29 +0000 (0:00:00.601) 0:00:17.044 *** 2025-09-10 00:58:26.559947 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4b73e898--cb4c--523f--8aca--971ee560c7ea-osd--block--4b73e898--cb4c--523f--8aca--971ee560c7ea', 'dm-uuid-LVM-uE5Yjf2CsxkFgHgIpbKsPiyHm2TurikN3S280Dz2nod0tzAwO5S1pyjk2inle8Pf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:58:26.559953 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2bea83b6--6800--529c--bdd8--a613f3421a6f-osd--block--2bea83b6--6800--529c--bdd8--a613f3421a6f', 'dm-uuid-LVM-ZI1l2hrd5ozIdIPbGSORiFKfU4pLhNqBcQz7LQPLKX2159t3r3EwXsxnR1q2MZN6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:58:26.559958 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:58:26.559962 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:58:26.559967 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:58:26.559974 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:58:26.559981 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--20419d67--2a88--5ee6--832e--dd0a34a7687a-osd--block--20419d67--2a88--5ee6--832e--dd0a34a7687a', 'dm-uuid-LVM-2hB5Q2a5udGrsgyYcPzLEBRo1qDiEpux1eUKdeDY1QiWJ6egp5CTMxBgXWbEK8V7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:58:26.559988 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:58:26.559993 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--28e77ae9--929e--5c68--8a2a--91f3bea00aca-osd--block--28e77ae9--929e--5c68--8a2a--91f3bea00aca', 'dm-uuid-LVM-k27w3X3DUUX1XZAerGiKa0AfnUAShbWdavK2lWdW2BR1Va40lsJhz6WVU6V9WMmo'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:58:26.559997 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:58:26.560002 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-09-10 00:58:26.560009 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:58:26.560016 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:58:26.560020 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:58:26.560028 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:58:26.560036 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_885f5351-8a1d-42ab-b6e2-24d16c7f1b28', 'scsi-SQEMU_QEMU_HARDDISK_885f5351-8a1d-42ab-b6e2-24d16c7f1b28'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_885f5351-8a1d-42ab-b6e2-24d16c7f1b28-part1', 'scsi-SQEMU_QEMU_HARDDISK_885f5351-8a1d-42ab-b6e2-24d16c7f1b28-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_885f5351-8a1d-42ab-b6e2-24d16c7f1b28-part14', 'scsi-SQEMU_QEMU_HARDDISK_885f5351-8a1d-42ab-b6e2-24d16c7f1b28-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_885f5351-8a1d-42ab-b6e2-24d16c7f1b28-part15', 
'scsi-SQEMU_QEMU_HARDDISK_885f5351-8a1d-42ab-b6e2-24d16c7f1b28-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_885f5351-8a1d-42ab-b6e2-24d16c7f1b28-part16', 'scsi-SQEMU_QEMU_HARDDISK_885f5351-8a1d-42ab-b6e2-24d16c7f1b28-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:58:26.560043 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-10 00:58:26.560050 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--4b73e898--cb4c--523f--8aca--971ee560c7ea-osd--block--4b73e898--cb4c--523f--8aca--971ee560c7ea'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YtsAg8-uQKK-fp6U-2Eoq-WWNO-UgLL-3GllCz', 'scsi-0QEMU_QEMU_HARDDISK_15be4489-a4ef-46b7-8669-fa4e45790ef6', 'scsi-SQEMU_QEMU_HARDDISK_15be4489-a4ef-46b7-8669-fa4e45790ef6'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:58:26.560054 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:58:26.560059 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:58:26.560064 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--2bea83b6--6800--529c--bdd8--a613f3421a6f-osd--block--2bea83b6--6800--529c--bdd8--a613f3421a6f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1at0Au-Fg2h-xPI3-6SzS-AeD9-5EM6-mEzMll', 'scsi-0QEMU_QEMU_HARDDISK_2400494c-3cf2-4780-9c6b-527d492c6bfd', 'scsi-SQEMU_QEMU_HARDDISK_2400494c-3cf2-4780-9c6b-527d492c6bfd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:58:26.560073 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:58:26.560078 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b761a3bc-d220-47d2-9376-37cff0079757', 'scsi-SQEMU_QEMU_HARDDISK_b761a3bc-d220-47d2-9376-37cff0079757'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:58:26.560084 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:58:26.560089 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-10-00-02-28-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:58:26.560093 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:58:26.560101 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d1bd04c-2fb3-49ff-963e-06aac3d99067', 'scsi-SQEMU_QEMU_HARDDISK_1d1bd04c-2fb3-49ff-963e-06aac3d99067'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d1bd04c-2fb3-49ff-963e-06aac3d99067-part1', 'scsi-SQEMU_QEMU_HARDDISK_1d1bd04c-2fb3-49ff-963e-06aac3d99067-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d1bd04c-2fb3-49ff-963e-06aac3d99067-part14', 'scsi-SQEMU_QEMU_HARDDISK_1d1bd04c-2fb3-49ff-963e-06aac3d99067-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d1bd04c-2fb3-49ff-963e-06aac3d99067-part15', 'scsi-SQEMU_QEMU_HARDDISK_1d1bd04c-2fb3-49ff-963e-06aac3d99067-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d1bd04c-2fb3-49ff-963e-06aac3d99067-part16', 'scsi-SQEMU_QEMU_HARDDISK_1d1bd04c-2fb3-49ff-963e-06aac3d99067-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:58:26.560110 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--20419d67--2a88--5ee6--832e--dd0a34a7687a-osd--block--20419d67--2a88--5ee6--832e--dd0a34a7687a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-56OBMz-I7EX-VavL-tZwu-3Gki-M3Zl-yWbhOp', 'scsi-0QEMU_QEMU_HARDDISK_4b58dc27-9074-4d6f-a7a6-fd15259a7e00', 'scsi-SQEMU_QEMU_HARDDISK_4b58dc27-9074-4d6f-a7a6-fd15259a7e00'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:58:26.560115 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--28e77ae9--929e--5c68--8a2a--91f3bea00aca-osd--block--28e77ae9--929e--5c68--8a2a--91f3bea00aca'], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-d25IX1-hlTn-ZBV2-xwSV-H1En-zkyU-H6wSTA', 'scsi-0QEMU_QEMU_HARDDISK_4a1ca5dc-6f40-4107-905c-b866d721086e', 'scsi-SQEMU_QEMU_HARDDISK_4a1ca5dc-6f40-4107-905c-b866d721086e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:58:26.560120 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--36dac960--67a7--54a4--bbd2--b6f8976b18f7-osd--block--36dac960--67a7--54a4--bbd2--b6f8976b18f7', 'dm-uuid-LVM-UcEdMXyLpheVryiFjGGikHOHzacaQqtC6drg4fiUEBxjwrRysilyddDiBDve0Xzr'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:58:26.560130 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ffc2c63-19a1-4e5a-ab50-a28911e045bb', 'scsi-SQEMU_QEMU_HARDDISK_9ffc2c63-19a1-4e5a-ab50-a28911e045bb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:58:26.560134 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-10-00-02-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:58:26.560140 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f4115e81--926e--57fb--8145--65084efa4466-osd--block--f4115e81--926e--57fb--8145--65084efa4466', 'dm-uuid-LVM-yNmKPiSdCRM90Ij0ZxbCNNJY3U3c3fnFtFxM8rdvExiBtaoR2TkcVtvQsk8io0dz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:58:26.560145 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:58:26.560149 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:58:26.560154 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:58:26.560158 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:58:26.560168 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:58:26.560173 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:58:26.560177 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:58:26.560183 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:58:26.560188 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:58:26.560195 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_76e3aa19-82bc-491d-ab0d-cf92a10f04de', 'scsi-SQEMU_QEMU_HARDDISK_76e3aa19-82bc-491d-ab0d-cf92a10f04de'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_76e3aa19-82bc-491d-ab0d-cf92a10f04de-part1', 'scsi-SQEMU_QEMU_HARDDISK_76e3aa19-82bc-491d-ab0d-cf92a10f04de-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_76e3aa19-82bc-491d-ab0d-cf92a10f04de-part14', 'scsi-SQEMU_QEMU_HARDDISK_76e3aa19-82bc-491d-ab0d-cf92a10f04de-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_76e3aa19-82bc-491d-ab0d-cf92a10f04de-part15', 'scsi-SQEMU_QEMU_HARDDISK_76e3aa19-82bc-491d-ab0d-cf92a10f04de-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_76e3aa19-82bc-491d-ab0d-cf92a10f04de-part16', 'scsi-SQEMU_QEMU_HARDDISK_76e3aa19-82bc-491d-ab0d-cf92a10f04de-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
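Aside on the long `skipping` dump above: ceph-ansible iterates over every entry of `ansible_facts['devices']` on each OSD node, and each item fails the `osd_auto_discovery | default(False) | bool` condition, so no device is auto-provisioned as an OSD in this job. The following is a rough, hypothetical Python sketch of this kind of device filter, not ceph-ansible's actual implementation; the device dicts are abbreviated from the `ansible_facts` entries printed in the log.

```python
# Hypothetical sketch of an osd_auto_discovery-style device filter.
# Device data abbreviated from the ansible_facts['devices'] dump in the log.
devices = {
    "sda": {"holders": [], "partitions": {"sda1": {}}, "removable": "0"},          # root disk
    "sdb": {"holders": ["ceph--...osd--block--..."], "partitions": {}, "removable": "0"},  # existing OSD (elided holder name)
    "sdd": {"holders": [], "partitions": {}, "removable": "0"},                    # unused blank disk
    "sr0": {"holders": [], "partitions": {}, "removable": "1"},                    # config-drive DVD
    "loop0": {"holders": [], "partitions": {}, "removable": "0", "sectors": 0},    # empty loop device
}

def osd_candidates(devices, osd_auto_discovery=False):
    """Return device names an auto-discovery pass might pick as new OSDs."""
    if not osd_auto_discovery:            # the false_condition shown in the log
        return []
    return [
        name for name, dev in devices.items()
        if dev.get("removable") != "1"    # skip DVD/removable media
        and dev.get("sectors", 1) != 0    # skip empty loop devices
        and not dev["holders"]            # not already an LVM PV / ceph OSD
        and not dev["partitions"]         # no existing partition table
    ]

print(osd_candidates(devices))            # [] -> every loop item is "skipping"
print(osd_candidates(devices, True))      # ['sdd'] -> only the unused blank disk
```

With discovery off, the filter returns nothing, which is exactly why each item in the log reports `skip_reason: Conditional result was False`; in this testbed the OSD disks are instead listed explicitly.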
2025-09-10 00:58:26.560206 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--36dac960--67a7--54a4--bbd2--b6f8976b18f7-osd--block--36dac960--67a7--54a4--bbd2--b6f8976b18f7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-W5LkY6-eCu4-yrK5-dhKk-hJo8-SW0n-F6I41f', 'scsi-0QEMU_QEMU_HARDDISK_2ea24e78-5d32-46b6-abea-13531085710c', 'scsi-SQEMU_QEMU_HARDDISK_2ea24e78-5d32-46b6-abea-13531085710c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:58:26.560210 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--f4115e81--926e--57fb--8145--65084efa4466-osd--block--f4115e81--926e--57fb--8145--65084efa4466'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0R9dBU-VzRA-BtuH-Y8EJ-iZ82-XXeb-Hwxfe0', 'scsi-0QEMU_QEMU_HARDDISK_b86cce0a-40ed-4a07-99f1-19becb84c901', 'scsi-SQEMU_QEMU_HARDDISK_b86cce0a-40ed-4a07-99f1-19becb84c901'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:58:26.560215 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d735b4b4-15bb-46a1-a658-203c9ec5fb9c', 'scsi-SQEMU_QEMU_HARDDISK_d735b4b4-15bb-46a1-a658-203c9ec5fb9c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:58:26.560225 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-10-00-02-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None,
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-10 00:58:26.560229 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:58:26.560233 | orchestrator |
2025-09-10 00:58:26.560236 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-09-10 00:58:26.560240 | orchestrator | Wednesday 10 September 2025 00:56:29 +0000 (0:00:00.583) 0:00:17.628 ***
2025-09-10 00:58:26.560244 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:58:26.560248 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:58:26.560252 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:58:26.560255 | orchestrator |
2025-09-10 00:58:26.560259 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-09-10 00:58:26.560263 | orchestrator | Wednesday 10 September 2025 00:56:30 +0000 (0:00:00.726) 0:00:18.354 ***
2025-09-10 00:58:26.560266 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:58:26.560270 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:58:26.560274 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:58:26.560278 | orchestrator |
2025-09-10 00:58:26.560281 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-09-10 00:58:26.560285 | orchestrator | Wednesday 10 September 2025 00:56:31 +0000 (0:00:00.559) 0:00:18.914 ***
2025-09-10 00:58:26.560289 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:58:26.560292 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:58:26.560296 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:58:26.560300 | orchestrator |
2025-09-10 00:58:26.560304 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-09-10 00:58:26.560307 | orchestrator | Wednesday 10 September 2025 00:56:31 +0000 (0:00:00.661) 0:00:19.575 ***
2025-09-10 00:58:26.560311 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:58:26.560315 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:58:26.560319 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:58:26.560322 | orchestrator |
2025-09-10 00:58:26.560326 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-09-10 00:58:26.560330 | orchestrator | Wednesday 10 September 2025 00:56:32 +0000 (0:00:00.318) 0:00:19.894 ***
2025-09-10 00:58:26.560333 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:58:26.560337 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:58:26.560341 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:58:26.560345 | orchestrator |
2025-09-10 00:58:26.560348 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-09-10 00:58:26.560352 | orchestrator | Wednesday 10 September 2025 00:56:32 +0000 (0:00:00.395) 0:00:20.289 ***
2025-09-10 00:58:26.560356 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:58:26.560359 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:58:26.560375 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:58:26.560383 | orchestrator |
2025-09-10 00:58:26.560387 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-09-10 00:58:26.560391 | orchestrator | Wednesday 10 September 2025 00:56:33 +0000 (0:00:00.528) 0:00:20.818 ***
2025-09-10 00:58:26.560394 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-09-10 00:58:26.560398 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-09-10 00:58:26.560402 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-09-10 00:58:26.560406 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-09-10 00:58:26.560409 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-09-10 00:58:26.560413 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-09-10 00:58:26.560417 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-09-10 00:58:26.560421 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-09-10 00:58:26.560424 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-09-10 00:58:26.560428 | orchestrator |
2025-09-10 00:58:26.560432 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-09-10 00:58:26.560435 | orchestrator | Wednesday 10 September 2025 00:56:33 +0000 (0:00:00.819) 0:00:21.637 ***
2025-09-10 00:58:26.560439 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-09-10 00:58:26.560443 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-09-10 00:58:26.560446 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-09-10 00:58:26.560450 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:58:26.560454 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-09-10 00:58:26.560458 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-09-10 00:58:26.560461 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-09-10 00:58:26.560465 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:58:26.560468 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-09-10 00:58:26.560472 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-09-10 00:58:26.560476 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-09-10 00:58:26.560479 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:58:26.560483 | orchestrator |
2025-09-10 00:58:26.560487 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-09-10 00:58:26.560490 | orchestrator | Wednesday 10 September 2025 00:56:34 +0000 (0:00:00.348) 0:00:21.986 ***
2025-09-10 00:58:26.560494 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-10 00:58:26.560498 | orchestrator |
2025-09-10 00:58:26.560502 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-09-10 00:58:26.560506 | orchestrator | Wednesday 10 September 2025 00:56:34 +0000 (0:00:00.770) 0:00:22.757 ***
2025-09-10 00:58:26.560510 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:58:26.560514 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:58:26.560517 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:58:26.560521 | orchestrator |
2025-09-10 00:58:26.560527 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-09-10 00:58:26.560531 | orchestrator | Wednesday 10 September 2025 00:56:35 +0000 (0:00:00.341) 0:00:23.098 ***
2025-09-10 00:58:26.560534 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:58:26.560538 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:58:26.560542 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:58:26.560545 | orchestrator |
2025-09-10 00:58:26.560549 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-09-10 00:58:26.560553 | orchestrator | Wednesday 10 September 2025 00:56:35 +0000 (0:00:00.332) 0:00:23.431 ***
2025-09-10 00:58:26.560556 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:58:26.560560 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:58:26.560567 | orchestrator | skipping: [testbed-node-5]
2025-09-10 00:58:26.560570 | orchestrator |
2025-09-10 00:58:26.560574 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-09-10 00:58:26.560578 | orchestrator | Wednesday 10 September 2025 00:56:36 +0000 (0:00:00.339) 0:00:23.771 ***
2025-09-10 00:58:26.560581 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:58:26.560585 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:58:26.560589 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:58:26.560592 | orchestrator |
2025-09-10 00:58:26.560596 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-09-10 00:58:26.560600 | orchestrator | Wednesday 10 September 2025 00:56:36 +0000 (0:00:00.621) 0:00:24.392 ***
2025-09-10 00:58:26.560604 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-10 00:58:26.560607 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-10 00:58:26.560611 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-10 00:58:26.560614 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:58:26.560618 | orchestrator |
2025-09-10 00:58:26.560622 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-09-10 00:58:26.560634 | orchestrator | Wednesday 10 September 2025 00:56:37 +0000 (0:00:00.383) 0:00:24.775 ***
2025-09-10 00:58:26.560638 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-10 00:58:26.560642 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-10 00:58:26.560646 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-10 00:58:26.560649 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:58:26.560653 | orchestrator |
2025-09-10 00:58:26.560659 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-09-10 00:58:26.560663 | orchestrator | Wednesday 10 September 2025 00:56:37 +0000 (0:00:00.378) 0:00:25.154 ***
2025-09-10 00:58:26.560666 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-10 00:58:26.560670 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-10 00:58:26.560674 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-10 00:58:26.560677 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:58:26.560681 | orchestrator |
2025-09-10 00:58:26.560685 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-09-10 00:58:26.560688 | orchestrator | Wednesday 10 September 2025 00:56:37 +0000 (0:00:00.394) 0:00:25.549 ***
2025-09-10 00:58:26.560692 | orchestrator | ok: [testbed-node-3]
2025-09-10 00:58:26.560696 | orchestrator | ok: [testbed-node-4]
2025-09-10 00:58:26.560700 | orchestrator | ok: [testbed-node-5]
2025-09-10 00:58:26.560703 | orchestrator |
2025-09-10 00:58:26.560707 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-09-10 00:58:26.560711 | orchestrator | Wednesday 10 September 2025 00:56:38 +0000 (0:00:00.383) 0:00:25.932 ***
2025-09-10 00:58:26.560715 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-09-10 00:58:26.560718 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-09-10 00:58:26.560722 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-09-10 00:58:26.560726 | orchestrator |
2025-09-10 00:58:26.560729 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-09-10 00:58:26.560733 | orchestrator | Wednesday 10 September 2025 00:56:38 +0000 (0:00:00.524) 0:00:26.456 ***
2025-09-10 00:58:26.560737 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-10 00:58:26.560741 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-10 00:58:26.560744 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-10 00:58:26.560748 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-09-10 00:58:26.560752 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-09-10 00:58:26.560756 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-09-10 00:58:26.560762 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-09-10 00:58:26.560766 | orchestrator |
2025-09-10 00:58:26.560770 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-09-10 00:58:26.560773 | orchestrator | Wednesday 10 September 2025 00:56:39 +0000 (0:00:01.068) 0:00:27.525 ***
2025-09-10 00:58:26.560777 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-10 00:58:26.560781 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-10 00:58:26.560784 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-10 00:58:26.560788 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-09-10 00:58:26.560792 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-09-10 00:58:26.560795 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-09-10 00:58:26.560799 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-09-10 00:58:26.560803 | orchestrator |
2025-09-10 00:58:26.560808 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2025-09-10 00:58:26.560812 | orchestrator | Wednesday 10 September 2025 00:56:41 +0000 (0:00:02.169) 0:00:29.694 ***
2025-09-10 00:58:26.560816 | orchestrator | skipping: [testbed-node-3]
2025-09-10 00:58:26.560820 | orchestrator | skipping: [testbed-node-4]
2025-09-10 00:58:26.560823 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2025-09-10 00:58:26.560827 | orchestrator |
2025-09-10 00:58:26.560831 | orchestrator | TASK [create openstack pool(s)] ************************************************
2025-09-10 00:58:26.560835 | orchestrator | Wednesday 10 September 2025 00:56:42 +0000 (0:00:00.374) 0:00:30.069 ***
2025-09-10 00:58:26.560839 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-10 00:58:26.560843 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-10 00:58:26.560847 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-10 00:58:26.560852 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-10 00:58:26.560856 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-10 00:58:26.560860 | orchestrator |
2025-09-10 00:58:26.560864 | orchestrator | TASK [generate keys]
*********************************************************** 2025-09-10 00:58:26.560867 | orchestrator | Wednesday 10 September 2025 00:57:29 +0000 (0:00:46.903) 0:01:16.972 *** 2025-09-10 00:58:26.560871 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-10 00:58:26.560875 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-10 00:58:26.560879 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-10 00:58:26.560885 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-10 00:58:26.560888 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-10 00:58:26.560892 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-10 00:58:26.560896 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-09-10 00:58:26.560899 | orchestrator | 2025-09-10 00:58:26.560903 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-09-10 00:58:26.560907 | orchestrator | Wednesday 10 September 2025 00:57:54 +0000 (0:00:24.881) 0:01:41.854 *** 2025-09-10 00:58:26.560910 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-10 00:58:26.560914 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-10 00:58:26.560918 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-10 00:58:26.560921 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-10 00:58:26.560925 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-10 00:58:26.560929 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-10 00:58:26.560932 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-10 00:58:26.560936 | orchestrator | 2025-09-10 00:58:26.560940 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-09-10 00:58:26.560943 | orchestrator | Wednesday 10 September 2025 00:58:06 +0000 (0:00:12.133) 0:01:53.987 *** 2025-09-10 00:58:26.560947 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-10 00:58:26.560951 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-10 00:58:26.560955 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-10 00:58:26.560958 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-10 00:58:26.560962 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-10 00:58:26.560966 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-10 00:58:26.560971 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-10 00:58:26.560975 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-10 00:58:26.560979 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-10 00:58:26.560983 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-10 00:58:26.560986 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-10 00:58:26.560990 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-10 00:58:26.560994 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-10 00:58:26.560997 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2025-09-10 00:58:26.561001 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-10 00:58:26.561005 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-10 00:58:26.561009 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-10 00:58:26.561012 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-10 00:58:26.561016 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-09-10 00:58:26.561020 | orchestrator | 2025-09-10 00:58:26.561023 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-10 00:58:26.561027 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2025-09-10 00:58:26.561034 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-10 00:58:26.561038 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-09-10 00:58:26.561042 | orchestrator | 2025-09-10 00:58:26.561045 | orchestrator | 2025-09-10 00:58:26.561049 | orchestrator | 2025-09-10 00:58:26.561055 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-10 00:58:26.561059 | orchestrator | Wednesday 10 September 2025 00:58:23 +0000 (0:00:17.151) 0:02:11.139 *** 2025-09-10 00:58:26.561063 | orchestrator | =============================================================================== 2025-09-10 00:58:26.561066 | orchestrator | create openstack pool(s) ----------------------------------------------- 46.90s 2025-09-10 00:58:26.561070 | orchestrator | generate keys ---------------------------------------------------------- 24.88s 2025-09-10 00:58:26.561074 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.15s 
2025-09-10 00:58:26.561078 | orchestrator | get keys from monitors ------------------------------------------------- 12.13s 2025-09-10 00:58:26.561081 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.20s 2025-09-10 00:58:26.561085 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.17s 2025-09-10 00:58:26.561089 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.86s 2025-09-10 00:58:26.561093 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.07s 2025-09-10 00:58:26.561096 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.82s 2025-09-10 00:58:26.561100 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.80s 2025-09-10 00:58:26.561104 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.79s 2025-09-10 00:58:26.561107 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.77s 2025-09-10 00:58:26.561111 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.74s 2025-09-10 00:58:26.561115 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.73s 2025-09-10 00:58:26.561118 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.66s 2025-09-10 00:58:26.561122 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.64s 2025-09-10 00:58:26.561126 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.62s 2025-09-10 00:58:26.561129 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.62s 2025-09-10 00:58:26.561133 | orchestrator | ceph-facts : Set_fact build devices from resolved symlinks -------------- 0.60s 2025-09-10 
00:58:26.561137 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.60s 2025-09-10 00:58:26.562525 | orchestrator | 2025-09-10 00:58:26 | INFO  | Task 78705445-47fd-46fc-957a-1acb5471f67a is in state STARTED 2025-09-10 00:58:26.564842 | orchestrator | 2025-09-10 00:58:26 | INFO  | Task 346e6093-dde8-4ac1-91af-d10c1f2d0fff is in state STARTED 2025-09-10 00:58:26.564861 | orchestrator | 2025-09-10 00:58:26 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:58:29.617797 | orchestrator | 2025-09-10 00:58:29 | INFO  | Task bef651aa-bfdf-41e3-a2ca-a92bab4e5a06 is in state STARTED 2025-09-10 00:58:29.619456 | orchestrator | 2025-09-10 00:58:29 | INFO  | Task 78705445-47fd-46fc-957a-1acb5471f67a is in state STARTED 2025-09-10 00:58:29.621035 | orchestrator | 2025-09-10 00:58:29 | INFO  | Task 346e6093-dde8-4ac1-91af-d10c1f2d0fff is in state STARTED 2025-09-10 00:58:29.621057 | orchestrator | 2025-09-10 00:58:29 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:58:32.676888 | orchestrator | 2025-09-10 00:58:32 | INFO  | Task bef651aa-bfdf-41e3-a2ca-a92bab4e5a06 is in state STARTED 2025-09-10 00:58:32.678366 | orchestrator | 2025-09-10 00:58:32 | INFO  | Task 78705445-47fd-46fc-957a-1acb5471f67a is in state STARTED 2025-09-10 00:58:32.681626 | orchestrator | 2025-09-10 00:58:32 | INFO  | Task 346e6093-dde8-4ac1-91af-d10c1f2d0fff is in state STARTED 2025-09-10 00:58:32.681669 | orchestrator | 2025-09-10 00:58:32 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:58:35.735302 | orchestrator | 2025-09-10 00:58:35 | INFO  | Task bef651aa-bfdf-41e3-a2ca-a92bab4e5a06 is in state STARTED 2025-09-10 00:58:35.736392 | orchestrator | 2025-09-10 00:58:35 | INFO  | Task 78705445-47fd-46fc-957a-1acb5471f67a is in state STARTED 2025-09-10 00:58:35.737546 | orchestrator | 2025-09-10 00:58:35 | INFO  | Task 346e6093-dde8-4ac1-91af-d10c1f2d0fff is in state STARTED 2025-09-10 00:58:35.737908 | orchestrator | 2025-09-10 
00:58:35 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:58:38.784777 | orchestrator | 2025-09-10 00:58:38 | INFO  | Task bef651aa-bfdf-41e3-a2ca-a92bab4e5a06 is in state STARTED 2025-09-10 00:58:38.787406 | orchestrator | 2025-09-10 00:58:38 | INFO  | Task 78705445-47fd-46fc-957a-1acb5471f67a is in state STARTED 2025-09-10 00:58:38.789023 | orchestrator | 2025-09-10 00:58:38 | INFO  | Task 346e6093-dde8-4ac1-91af-d10c1f2d0fff is in state STARTED 2025-09-10 00:58:38.789499 | orchestrator | 2025-09-10 00:58:38 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:58:41.843640 | orchestrator | 2025-09-10 00:58:41 | INFO  | Task bef651aa-bfdf-41e3-a2ca-a92bab4e5a06 is in state STARTED 2025-09-10 00:58:41.844641 | orchestrator | 2025-09-10 00:58:41 | INFO  | Task 78705445-47fd-46fc-957a-1acb5471f67a is in state STARTED 2025-09-10 00:58:41.846077 | orchestrator | 2025-09-10 00:58:41 | INFO  | Task 346e6093-dde8-4ac1-91af-d10c1f2d0fff is in state STARTED 2025-09-10 00:58:41.846219 | orchestrator | 2025-09-10 00:58:41 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:58:44.890205 | orchestrator | 2025-09-10 00:58:44 | INFO  | Task bef651aa-bfdf-41e3-a2ca-a92bab4e5a06 is in state STARTED 2025-09-10 00:58:44.891376 | orchestrator | 2025-09-10 00:58:44 | INFO  | Task 78705445-47fd-46fc-957a-1acb5471f67a is in state STARTED 2025-09-10 00:58:44.893019 | orchestrator | 2025-09-10 00:58:44 | INFO  | Task 346e6093-dde8-4ac1-91af-d10c1f2d0fff is in state STARTED 2025-09-10 00:58:44.893046 | orchestrator | 2025-09-10 00:58:44 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:58:47.945572 | orchestrator | 2025-09-10 00:58:47 | INFO  | Task bef651aa-bfdf-41e3-a2ca-a92bab4e5a06 is in state STARTED 2025-09-10 00:58:47.947978 | orchestrator | 2025-09-10 00:58:47 | INFO  | Task 78705445-47fd-46fc-957a-1acb5471f67a is in state STARTED 2025-09-10 00:58:47.949855 | orchestrator | 2025-09-10 00:58:47 | INFO  | Task 
346e6093-dde8-4ac1-91af-d10c1f2d0fff is in state STARTED 2025-09-10 00:58:47.949890 | orchestrator | 2025-09-10 00:58:47 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:58:51.002392 | orchestrator | 2025-09-10 00:58:51 | INFO  | Task bef651aa-bfdf-41e3-a2ca-a92bab4e5a06 is in state STARTED 2025-09-10 00:58:51.003591 | orchestrator | 2025-09-10 00:58:51 | INFO  | Task 78705445-47fd-46fc-957a-1acb5471f67a is in state STARTED 2025-09-10 00:58:51.005937 | orchestrator | 2025-09-10 00:58:51 | INFO  | Task 346e6093-dde8-4ac1-91af-d10c1f2d0fff is in state STARTED 2025-09-10 00:58:51.006658 | orchestrator | 2025-09-10 00:58:51 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:58:54.057160 | orchestrator | 2025-09-10 00:58:54 | INFO  | Task bef651aa-bfdf-41e3-a2ca-a92bab4e5a06 is in state STARTED 2025-09-10 00:58:54.057272 | orchestrator | 2025-09-10 00:58:54 | INFO  | Task 78705445-47fd-46fc-957a-1acb5471f67a is in state SUCCESS 2025-09-10 00:58:54.061373 | orchestrator | 2025-09-10 00:58:54 | INFO  | Task 346e6093-dde8-4ac1-91af-d10c1f2d0fff is in state STARTED 2025-09-10 00:58:54.061567 | orchestrator | 2025-09-10 00:58:54 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:58:57.121120 | orchestrator | 2025-09-10 00:58:57 | INFO  | Task bef651aa-bfdf-41e3-a2ca-a92bab4e5a06 is in state STARTED 2025-09-10 00:58:57.123473 | orchestrator | 2025-09-10 00:58:57 | INFO  | Task 346e6093-dde8-4ac1-91af-d10c1f2d0fff is in state STARTED 2025-09-10 00:58:57.127231 | orchestrator | 2025-09-10 00:58:57 | INFO  | Task 066f7f16-915e-4131-8f34-6925f458478d is in state STARTED 2025-09-10 00:58:57.127761 | orchestrator | 2025-09-10 00:58:57 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:59:00.171516 | orchestrator | 2025-09-10 00:59:00 | INFO  | Task bef651aa-bfdf-41e3-a2ca-a92bab4e5a06 is in state STARTED 2025-09-10 00:59:00.172361 | orchestrator | 2025-09-10 00:59:00 | INFO  | Task 346e6093-dde8-4ac1-91af-d10c1f2d0fff is in state 
STARTED 2025-09-10 00:59:00.173421 | orchestrator | 2025-09-10 00:59:00 | INFO  | Task 066f7f16-915e-4131-8f34-6925f458478d is in state STARTED 2025-09-10 00:59:00.173450 | orchestrator | 2025-09-10 00:59:00 | INFO  | Wait 1 second(s) until the next check 2025-09-10 00:59:03.225735 | orchestrator | 2025-09-10 00:59:03 | INFO  | Task bef651aa-bfdf-41e3-a2ca-a92bab4e5a06 is in state SUCCESS 2025-09-10 00:59:03.226531 | orchestrator | 2025-09-10 00:59:03.226563 | orchestrator | 2025-09-10 00:59:03.226575 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-09-10 00:59:03.226585 | orchestrator | 2025-09-10 00:59:03.226595 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-09-10 00:59:03.226606 | orchestrator | Wednesday 10 September 2025 00:58:28 +0000 (0:00:00.158) 0:00:00.158 *** 2025-09-10 00:59:03.226616 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-09-10 00:59:03.226627 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-10 00:59:03.226637 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-10 00:59:03.226646 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-09-10 00:59:03.226656 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-10 00:59:03.226666 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-09-10 00:59:03.226707 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-09-10 00:59:03.226718 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-09-10 
00:59:03.226728 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-09-10 00:59:03.226780 | orchestrator | 2025-09-10 00:59:03.226834 | orchestrator | TASK [Create share directory] ************************************************** 2025-09-10 00:59:03.227096 | orchestrator | Wednesday 10 September 2025 00:58:32 +0000 (0:00:04.256) 0:00:04.415 *** 2025-09-10 00:59:03.227107 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-10 00:59:03.227118 | orchestrator | 2025-09-10 00:59:03.227128 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-09-10 00:59:03.227138 | orchestrator | Wednesday 10 September 2025 00:58:33 +0000 (0:00:00.958) 0:00:05.373 *** 2025-09-10 00:59:03.227148 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-09-10 00:59:03.227158 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-10 00:59:03.227187 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-10 00:59:03.227198 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-09-10 00:59:03.227207 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-10 00:59:03.227217 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-09-10 00:59:03.227226 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-09-10 00:59:03.227236 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-09-10 00:59:03.227245 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-09-10 00:59:03.227254 | orchestrator | 2025-09-10 00:59:03.227264 | orchestrator | TASK [Write ceph keys to the configuration 
directory] ************************** 2025-09-10 00:59:03.227273 | orchestrator | Wednesday 10 September 2025 00:58:46 +0000 (0:00:13.092) 0:00:18.466 *** 2025-09-10 00:59:03.227283 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2025-09-10 00:59:03.227292 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-10 00:59:03.227302 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-10 00:59:03.227311 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-09-10 00:59:03.227321 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-10 00:59:03.227330 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2025-09-10 00:59:03.227340 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-09-10 00:59:03.227349 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2025-09-10 00:59:03.227358 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2025-09-10 00:59:03.227368 | orchestrator | 2025-09-10 00:59:03.227377 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-10 00:59:03.227387 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-10 00:59:03.227397 | orchestrator | 2025-09-10 00:59:03.227407 | orchestrator | 2025-09-10 00:59:03.227416 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-10 00:59:03.227425 | orchestrator | Wednesday 10 September 2025 00:58:52 +0000 (0:00:06.536) 0:00:25.002 *** 2025-09-10 00:59:03.227435 | orchestrator | =============================================================================== 2025-09-10 00:59:03.227444 | orchestrator | Write ceph keys to the share directory 
--------------------------------- 13.09s 2025-09-10 00:59:03.227454 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.54s 2025-09-10 00:59:03.227463 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.26s 2025-09-10 00:59:03.227472 | orchestrator | Create share directory -------------------------------------------------- 0.96s 2025-09-10 00:59:03.227482 | orchestrator | 2025-09-10 00:59:03.227491 | orchestrator | 2025-09-10 00:59:03.227501 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-10 00:59:03.227510 | orchestrator | 2025-09-10 00:59:03.227529 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-10 00:59:03.227539 | orchestrator | Wednesday 10 September 2025 00:57:11 +0000 (0:00:00.274) 0:00:00.274 *** 2025-09-10 00:59:03.227549 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:59:03.227558 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:59:03.227568 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:59:03.227577 | orchestrator | 2025-09-10 00:59:03.227587 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-10 00:59:03.227596 | orchestrator | Wednesday 10 September 2025 00:57:11 +0000 (0:00:00.281) 0:00:00.555 *** 2025-09-10 00:59:03.227612 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-09-10 00:59:03.227622 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-09-10 00:59:03.227632 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-09-10 00:59:03.227641 | orchestrator | 2025-09-10 00:59:03.227651 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-09-10 00:59:03.227661 | orchestrator | 2025-09-10 00:59:03.227670 | orchestrator | TASK [horizon : include_tasks] 
************************************************* 2025-09-10 00:59:03.227696 | orchestrator | Wednesday 10 September 2025 00:57:11 +0000 (0:00:00.424) 0:00:00.979 *** 2025-09-10 00:59:03.227713 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 00:59:03.227725 | orchestrator | 2025-09-10 00:59:03.227736 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-09-10 00:59:03.227747 | orchestrator | Wednesday 10 September 2025 00:57:12 +0000 (0:00:00.478) 0:00:01.458 *** 2025-09-10 00:59:03.227764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-10 00:59:03.227798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-10 00:59:03.227820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 
'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-10 00:59:03.227908 | orchestrator | 2025-09-10 00:59:03.227919 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-09-10 00:59:03.227931 | orchestrator | Wednesday 10 September 2025 00:57:13 +0000 (0:00:01.094) 0:00:02.553 *** 2025-09-10 00:59:03.227943 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:59:03.227954 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:59:03.227965 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:59:03.227976 | orchestrator | 2025-09-10 00:59:03.227988 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-10 00:59:03.228006 | orchestrator | Wednesday 10 September 2025 00:57:13 +0000 (0:00:00.449) 0:00:03.002 *** 2025-09-10 
00:59:03.228017 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-10 00:59:03.228029 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-10 00:59:03.228047 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-09-10 00:59:03.228059 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-09-10 00:59:03.228071 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-09-10 00:59:03.228082 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-09-10 00:59:03.228091 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-09-10 00:59:03.228100 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-09-10 00:59:03.228110 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-10 00:59:03.228119 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-10 00:59:03.228129 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-09-10 00:59:03.228138 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-09-10 00:59:03.228153 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-09-10 00:59:03.228163 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-09-10 00:59:03.228172 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-09-10 00:59:03.228182 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-09-10 00:59:03.228191 | orchestrator | skipping: [testbed-node-2] => (item={'name': 
'cloudkitty', 'enabled': False})  2025-09-10 00:59:03.228201 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-10 00:59:03.228210 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-09-10 00:59:03.228219 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-09-10 00:59:03.228229 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-09-10 00:59:03.228238 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-09-10 00:59:03.228247 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-09-10 00:59:03.228257 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-09-10 00:59:03.228267 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-09-10 00:59:03.228278 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-09-10 00:59:03.228288 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-09-10 00:59:03.228297 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-09-10 00:59:03.228307 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-09-10 00:59:03.228316 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-09-10 00:59:03.228325 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-09-10 00:59:03.228341 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-09-10 00:59:03.228350 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-09-10 00:59:03.228359 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-09-10 00:59:03.228369 | orchestrator | 2025-09-10 00:59:03.228378 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-10 00:59:03.228387 | orchestrator | Wednesday 10 September 2025 00:57:14 +0000 (0:00:00.748) 0:00:03.751 *** 2025-09-10 00:59:03.228397 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:59:03.228407 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:59:03.228416 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:59:03.228425 | orchestrator | 2025-09-10 00:59:03.228435 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-10 00:59:03.228444 | orchestrator | Wednesday 10 September 2025 00:57:14 +0000 (0:00:00.318) 0:00:04.069 *** 2025-09-10 00:59:03.228453 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:59:03.228463 | orchestrator | 2025-09-10 00:59:03.228472 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-10 00:59:03.228486 | orchestrator | Wednesday 10 September 2025 00:57:14 +0000 (0:00:00.136) 0:00:04.205 *** 2025-09-10 00:59:03.228496 | 
orchestrator | skipping: [testbed-node-0] 2025-09-10 00:59:03.228506 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:59:03.228515 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:59:03.228525 | orchestrator | 2025-09-10 00:59:03.228534 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-10 00:59:03.228544 | orchestrator | Wednesday 10 September 2025 00:57:15 +0000 (0:00:00.473) 0:00:04.679 *** 2025-09-10 00:59:03.228553 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:59:03.228563 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:59:03.228572 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:59:03.228582 | orchestrator | 2025-09-10 00:59:03.228591 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-10 00:59:03.228601 | orchestrator | Wednesday 10 September 2025 00:57:15 +0000 (0:00:00.311) 0:00:04.990 *** 2025-09-10 00:59:03.228610 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:59:03.228620 | orchestrator | 2025-09-10 00:59:03.228630 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-10 00:59:03.228639 | orchestrator | Wednesday 10 September 2025 00:57:15 +0000 (0:00:00.127) 0:00:05.118 *** 2025-09-10 00:59:03.228648 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:59:03.228658 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:59:03.228667 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:59:03.228702 | orchestrator | 2025-09-10 00:59:03.228717 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-10 00:59:03.228727 | orchestrator | Wednesday 10 September 2025 00:57:16 +0000 (0:00:00.286) 0:00:05.404 *** 2025-09-10 00:59:03.228737 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:59:03.228747 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:59:03.228756 | orchestrator | ok: 
[testbed-node-2] 2025-09-10 00:59:03.228766 | orchestrator | 2025-09-10 00:59:03.228775 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-10 00:59:03.228785 | orchestrator | Wednesday 10 September 2025 00:57:16 +0000 (0:00:00.357) 0:00:05.761 *** 2025-09-10 00:59:03.228794 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:59:03.228804 | orchestrator | 2025-09-10 00:59:03.228813 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-10 00:59:03.228823 | orchestrator | Wednesday 10 September 2025 00:57:16 +0000 (0:00:00.134) 0:00:05.896 *** 2025-09-10 00:59:03.228839 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:59:03.228848 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:59:03.228858 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:59:03.228867 | orchestrator | 2025-09-10 00:59:03.228877 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-10 00:59:03.228886 | orchestrator | Wednesday 10 September 2025 00:57:17 +0000 (0:00:00.507) 0:00:06.403 *** 2025-09-10 00:59:03.228896 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:59:03.228905 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:59:03.228915 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:59:03.228924 | orchestrator | 2025-09-10 00:59:03.228934 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-10 00:59:03.228944 | orchestrator | Wednesday 10 September 2025 00:57:17 +0000 (0:00:00.305) 0:00:06.708 *** 2025-09-10 00:59:03.228953 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:59:03.228962 | orchestrator | 2025-09-10 00:59:03.228972 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-10 00:59:03.228982 | orchestrator | Wednesday 10 September 2025 00:57:17 +0000 (0:00:00.130) 0:00:06.839 *** 
2025-09-10 00:59:03.228991 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:59:03.229001 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:59:03.229010 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:59:03.229020 | orchestrator | 2025-09-10 00:59:03.229029 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-10 00:59:03.229039 | orchestrator | Wednesday 10 September 2025 00:57:17 +0000 (0:00:00.270) 0:00:07.110 *** 2025-09-10 00:59:03.229048 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:59:03.229058 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:59:03.229067 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:59:03.229077 | orchestrator | 2025-09-10 00:59:03.229086 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-10 00:59:03.229096 | orchestrator | Wednesday 10 September 2025 00:57:18 +0000 (0:00:00.296) 0:00:07.406 *** 2025-09-10 00:59:03.229106 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:59:03.229115 | orchestrator | 2025-09-10 00:59:03.229125 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-10 00:59:03.229134 | orchestrator | Wednesday 10 September 2025 00:57:18 +0000 (0:00:00.347) 0:00:07.753 *** 2025-09-10 00:59:03.229144 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:59:03.229153 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:59:03.229163 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:59:03.229172 | orchestrator | 2025-09-10 00:59:03.229181 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-10 00:59:03.229191 | orchestrator | Wednesday 10 September 2025 00:57:18 +0000 (0:00:00.312) 0:00:08.066 *** 2025-09-10 00:59:03.229201 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:59:03.229210 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:59:03.229220 | 
orchestrator | ok: [testbed-node-2] 2025-09-10 00:59:03.229229 | orchestrator | 2025-09-10 00:59:03.229239 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-10 00:59:03.229249 | orchestrator | Wednesday 10 September 2025 00:57:19 +0000 (0:00:00.294) 0:00:08.361 *** 2025-09-10 00:59:03.229258 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:59:03.229268 | orchestrator | 2025-09-10 00:59:03.229277 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-10 00:59:03.229287 | orchestrator | Wednesday 10 September 2025 00:57:19 +0000 (0:00:00.123) 0:00:08.484 *** 2025-09-10 00:59:03.229296 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:59:03.229306 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:59:03.229315 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:59:03.229325 | orchestrator | 2025-09-10 00:59:03.229334 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-10 00:59:03.229344 | orchestrator | Wednesday 10 September 2025 00:57:19 +0000 (0:00:00.286) 0:00:08.771 *** 2025-09-10 00:59:03.229353 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:59:03.229369 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:59:03.229379 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:59:03.229388 | orchestrator | 2025-09-10 00:59:03.229403 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-10 00:59:03.229413 | orchestrator | Wednesday 10 September 2025 00:57:20 +0000 (0:00:00.500) 0:00:09.272 *** 2025-09-10 00:59:03.229422 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:59:03.229432 | orchestrator | 2025-09-10 00:59:03.229441 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-10 00:59:03.229451 | orchestrator | Wednesday 10 September 2025 00:57:20 +0000 (0:00:00.129) 
0:00:09.401 *** 2025-09-10 00:59:03.229460 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:59:03.229469 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:59:03.229479 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:59:03.229488 | orchestrator | 2025-09-10 00:59:03.229498 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-10 00:59:03.229507 | orchestrator | Wednesday 10 September 2025 00:57:20 +0000 (0:00:00.318) 0:00:09.719 *** 2025-09-10 00:59:03.229517 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:59:03.229526 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:59:03.229536 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:59:03.229545 | orchestrator | 2025-09-10 00:59:03.229555 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-10 00:59:03.229564 | orchestrator | Wednesday 10 September 2025 00:57:20 +0000 (0:00:00.311) 0:00:10.031 *** 2025-09-10 00:59:03.229583 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:59:03.229593 | orchestrator | 2025-09-10 00:59:03.229603 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-10 00:59:03.229612 | orchestrator | Wednesday 10 September 2025 00:57:20 +0000 (0:00:00.131) 0:00:10.162 *** 2025-09-10 00:59:03.229622 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:59:03.229631 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:59:03.229641 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:59:03.229650 | orchestrator | 2025-09-10 00:59:03.229660 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-10 00:59:03.229669 | orchestrator | Wednesday 10 September 2025 00:57:21 +0000 (0:00:00.290) 0:00:10.453 *** 2025-09-10 00:59:03.229717 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:59:03.229729 | orchestrator | ok: [testbed-node-1] 2025-09-10 
00:59:03.229738 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:59:03.229748 | orchestrator | 2025-09-10 00:59:03.229757 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-10 00:59:03.229767 | orchestrator | Wednesday 10 September 2025 00:57:21 +0000 (0:00:00.535) 0:00:10.989 *** 2025-09-10 00:59:03.229776 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:59:03.229786 | orchestrator | 2025-09-10 00:59:03.229795 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-10 00:59:03.229805 | orchestrator | Wednesday 10 September 2025 00:57:21 +0000 (0:00:00.127) 0:00:11.117 *** 2025-09-10 00:59:03.229814 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:59:03.229824 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:59:03.229833 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:59:03.229842 | orchestrator | 2025-09-10 00:59:03.229852 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-10 00:59:03.229862 | orchestrator | Wednesday 10 September 2025 00:57:22 +0000 (0:00:00.277) 0:00:11.394 *** 2025-09-10 00:59:03.229871 | orchestrator | ok: [testbed-node-0] 2025-09-10 00:59:03.229881 | orchestrator | ok: [testbed-node-1] 2025-09-10 00:59:03.229890 | orchestrator | ok: [testbed-node-2] 2025-09-10 00:59:03.229900 | orchestrator | 2025-09-10 00:59:03.229909 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-10 00:59:03.229919 | orchestrator | Wednesday 10 September 2025 00:57:22 +0000 (0:00:00.312) 0:00:11.706 *** 2025-09-10 00:59:03.229928 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:59:03.229938 | orchestrator | 2025-09-10 00:59:03.229953 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-10 00:59:03.229963 | orchestrator | Wednesday 10 September 2025 00:57:22 
+0000 (0:00:00.130) 0:00:11.837 *** 2025-09-10 00:59:03.229972 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:59:03.229982 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:59:03.229991 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:59:03.230000 | orchestrator | 2025-09-10 00:59:03.230010 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-09-10 00:59:03.230060 | orchestrator | Wednesday 10 September 2025 00:57:23 +0000 (0:00:00.472) 0:00:12.309 *** 2025-09-10 00:59:03.230070 | orchestrator | changed: [testbed-node-1] 2025-09-10 00:59:03.230080 | orchestrator | changed: [testbed-node-0] 2025-09-10 00:59:03.230090 | orchestrator | changed: [testbed-node-2] 2025-09-10 00:59:03.230099 | orchestrator | 2025-09-10 00:59:03.230108 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-09-10 00:59:03.230118 | orchestrator | Wednesday 10 September 2025 00:57:24 +0000 (0:00:01.677) 0:00:13.986 *** 2025-09-10 00:59:03.230127 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-10 00:59:03.230137 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-10 00:59:03.230147 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-10 00:59:03.230156 | orchestrator | 2025-09-10 00:59:03.230166 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-09-10 00:59:03.230175 | orchestrator | Wednesday 10 September 2025 00:57:26 +0000 (0:00:02.137) 0:00:16.124 *** 2025-09-10 00:59:03.230185 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-10 00:59:03.230194 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-10 
00:59:03.230204 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-10 00:59:03.230213 | orchestrator | 2025-09-10 00:59:03.230223 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-09-10 00:59:03.230233 | orchestrator | Wednesday 10 September 2025 00:57:28 +0000 (0:00:01.958) 0:00:18.082 *** 2025-09-10 00:59:03.230249 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-10 00:59:03.230259 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-10 00:59:03.230268 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-10 00:59:03.230278 | orchestrator | 2025-09-10 00:59:03.230287 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-09-10 00:59:03.230297 | orchestrator | Wednesday 10 September 2025 00:57:30 +0000 (0:00:02.072) 0:00:20.155 *** 2025-09-10 00:59:03.230306 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:59:03.230316 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:59:03.230325 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:59:03.230335 | orchestrator | 2025-09-10 00:59:03.230344 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-09-10 00:59:03.230354 | orchestrator | Wednesday 10 September 2025 00:57:31 +0000 (0:00:00.281) 0:00:20.436 *** 2025-09-10 00:59:03.230363 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:59:03.230373 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:59:03.230382 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:59:03.230391 | orchestrator | 2025-09-10 00:59:03.230406 | orchestrator | TASK [horizon : include_tasks] 
************************************************* 2025-09-10 00:59:03.230416 | orchestrator | Wednesday 10 September 2025 00:57:31 +0000 (0:00:00.292) 0:00:20.728 *** 2025-09-10 00:59:03.230425 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 00:59:03.230441 | orchestrator | 2025-09-10 00:59:03.230451 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-09-10 00:59:03.230461 | orchestrator | Wednesday 10 September 2025 00:57:32 +0000 (0:00:00.550) 0:00:21.279 *** 2025-09-10 00:59:03.230472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-10 00:59:03.230497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-10 00:59:03.230515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 
'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-10 00:59:03.230525 | orchestrator | 2025-09-10 00:59:03.230535 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-09-10 00:59:03.230545 | orchestrator | Wednesday 10 September 2025 00:57:33 +0000 (0:00:01.652) 0:00:22.931 *** 2025-09-10 00:59:03.230567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 
'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-10 00:59:03.230676 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:59:03.230707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': 
[]}}}})  2025-09-10 00:59:03.230724 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:59:03.230741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-10 00:59:03.230758 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:59:03.230767 | orchestrator | 2025-09-10 00:59:03.230777 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-09-10 00:59:03.230786 | orchestrator | Wednesday 10 September 2025 00:57:34 +0000 (0:00:00.622) 0:00:23.554 *** 2025-09-10 00:59:03.230803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 
'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-10 00:59:03.230814 | orchestrator | skipping: [testbed-node-0] 2025-09-10 00:59:03.230829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-10 00:59:03.230845 | orchestrator | skipping: [testbed-node-1] 2025-09-10 00:59:03.230862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-10 00:59:03.230879 | orchestrator | skipping: [testbed-node-2] 2025-09-10 00:59:03.230888 | orchestrator | 2025-09-10 00:59:03.230898 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-09-10 00:59:03.230911 | orchestrator | Wednesday 10 September 2025 00:57:35 +0000 (0:00:00.813) 0:00:24.367 *** 2025-09-10 00:59:03.230922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-10 00:59:03.230945 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-10 00:59:03.230966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-10 00:59:03.230977 | orchestrator |
2025-09-10 00:59:03.230987 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-09-10 00:59:03.230996 | orchestrator | Wednesday 10 September 2025 00:57:36 +0000 (0:00:01.768) 0:00:26.135 ***
2025-09-10 00:59:03.231061 | orchestrator | skipping: [testbed-node-0]
2025-09-10 00:59:03.231072 | orchestrator | skipping: [testbed-node-1]
2025-09-10 00:59:03.231082 | orchestrator | skipping: [testbed-node-2]
2025-09-10 00:59:03.231091 | orchestrator |
2025-09-10 00:59:03.231101 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-09-10 00:59:03.231110 | orchestrator | Wednesday 10 September 2025 00:57:37 +0000 (0:00:00.286) 0:00:26.421 ***
2025-09-10 00:59:03.231120 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-10 00:59:03.231130 | orchestrator |
2025-09-10 00:59:03.231139 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2025-09-10 00:59:03.231149 | orchestrator | Wednesday 10 September 2025 00:57:37 +0000 (0:00:00.510) 0:00:26.932 ***
2025-09-10 00:59:03.231166 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:59:03.231175 | orchestrator |
2025-09-10 00:59:03.231191 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2025-09-10 00:59:03.231201 | orchestrator | Wednesday 10 September 2025 00:57:39 +0000 (0:00:02.302) 0:00:29.234 ***
2025-09-10 00:59:03.231210 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:59:03.231220 | orchestrator |
2025-09-10 00:59:03.231229 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2025-09-10 00:59:03.231239 | orchestrator | Wednesday 10 September 2025 00:57:42 +0000 (0:00:02.596) 0:00:31.831 ***
2025-09-10 00:59:03.231248 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:59:03.231258 | orchestrator |
2025-09-10 00:59:03.231268 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-09-10 00:59:03.231277 | orchestrator | Wednesday 10 September 2025 00:57:58 +0000 (0:00:16.216) 0:00:48.047 ***
2025-09-10 00:59:03.231287 | orchestrator |
2025-09-10 00:59:03.231296 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-09-10 00:59:03.231306 | orchestrator | Wednesday 10 September 2025 00:57:58 +0000 (0:00:00.064) 0:00:48.111 ***
2025-09-10 00:59:03.231315 | orchestrator |
2025-09-10 00:59:03.231325 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-09-10 00:59:03.231334 | orchestrator | Wednesday 10 September 2025 00:57:58 +0000 (0:00:00.061) 0:00:48.173 ***
2025-09-10 00:59:03.231344 | orchestrator |
2025-09-10 00:59:03.231358 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2025-09-10 00:59:03.231368 | orchestrator | Wednesday 10 September 2025 00:57:58 +0000 (0:00:00.068) 0:00:48.241 ***
2025-09-10 00:59:03.231377 | orchestrator | changed: [testbed-node-0]
2025-09-10 00:59:03.231387 | orchestrator | changed: [testbed-node-1]
2025-09-10 00:59:03.231397 | orchestrator | changed: [testbed-node-2]
2025-09-10 00:59:03.231406 | orchestrator |
2025-09-10 00:59:03.231416 | orchestrator | PLAY RECAP *********************************************************************
2025-09-10 00:59:03.231426 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2025-09-10 00:59:03.231436 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-09-10 00:59:03.231445 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-09-10 00:59:03.231455 | orchestrator |
2025-09-10 00:59:03.231464 | orchestrator |
2025-09-10 00:59:03.231474 | orchestrator | TASKS RECAP ********************************************************************
2025-09-10 00:59:03.231483 | orchestrator | Wednesday 10 September 2025 00:59:00 +0000 (0:01:01.910) 0:01:50.151 ***
2025-09-10 00:59:03.231493 | orchestrator | ===============================================================================
2025-09-10 00:59:03.231502 | orchestrator | horizon : Restart horizon container ------------------------------------ 61.91s
2025-09-10 00:59:03.231512 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.22s
2025-09-10 00:59:03.231521 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.60s
2025-09-10 00:59:03.231530 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.30s
2025-09-10 00:59:03.231540 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.14s
2025-09-10 00:59:03.231549 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.07s
2025-09-10 00:59:03.231559 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.96s
2025-09-10 00:59:03.231568 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.77s
2025-09-10 00:59:03.231577 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.68s
2025-09-10 00:59:03.231587 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.65s
2025-09-10 00:59:03.231602 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.09s
2025-09-10 00:59:03.231611 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.81s
2025-09-10 00:59:03.231621 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.75s
2025-09-10 00:59:03.231630 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.62s
2025-09-10 00:59:03.231639 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.55s
2025-09-10 00:59:03.231649 | orchestrator | horizon : Update policy file name --------------------------------------- 0.54s
2025-09-10 00:59:03.231658 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.51s
2025-09-10 00:59:03.231668 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.51s
2025-09-10 00:59:03.231691 | orchestrator | horizon : Update policy file name --------------------------------------- 0.50s
2025-09-10 00:59:03.231703 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.48s
2025-09-10 00:59:03.231715 | orchestrator | 2025-09-10 00:59:03 | INFO  | Task 346e6093-dde8-4ac1-91af-d10c1f2d0fff is in state STARTED
2025-09-10 00:59:03.231727 | orchestrator | 2025-09-10 00:59:03 | INFO  | Task 066f7f16-915e-4131-8f34-6925f458478d is in state STARTED
2025-09-10 00:59:03.231738 | orchestrator | 2025-09-10 00:59:03 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:59:06.275009 | orchestrator | 2025-09-10 00:59:06 | INFO  | Task 346e6093-dde8-4ac1-91af-d10c1f2d0fff is in state STARTED
2025-09-10 00:59:06.276369 | orchestrator | 2025-09-10 00:59:06 | INFO  | Task 066f7f16-915e-4131-8f34-6925f458478d is in state STARTED
2025-09-10 00:59:06.276399 | orchestrator | 2025-09-10 00:59:06 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:59:09.321981 |
orchestrator | 2025-09-10 00:59:09 | INFO  | Task 346e6093-dde8-4ac1-91af-d10c1f2d0fff is in state STARTED
2025-09-10 00:59:09.323890 | orchestrator | 2025-09-10 00:59:09 | INFO  | Task 066f7f16-915e-4131-8f34-6925f458478d is in state STARTED
2025-09-10 00:59:09.323920 | orchestrator | 2025-09-10 00:59:09 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:59:12.379800 | orchestrator | 2025-09-10 00:59:12 | INFO  | Task 346e6093-dde8-4ac1-91af-d10c1f2d0fff is in state STARTED
2025-09-10 00:59:12.382283 | orchestrator | 2025-09-10 00:59:12 | INFO  | Task 066f7f16-915e-4131-8f34-6925f458478d is in state STARTED
2025-09-10 00:59:12.382313 | orchestrator | 2025-09-10 00:59:12 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:59:15.423981 | orchestrator | 2025-09-10 00:59:15 | INFO  | Task 346e6093-dde8-4ac1-91af-d10c1f2d0fff is in state STARTED
2025-09-10 00:59:15.424744 | orchestrator | 2025-09-10 00:59:15 | INFO  | Task 066f7f16-915e-4131-8f34-6925f458478d is in state STARTED
2025-09-10 00:59:15.424772 | orchestrator | 2025-09-10 00:59:15 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:59:18.465210 | orchestrator | 2025-09-10 00:59:18 | INFO  | Task 346e6093-dde8-4ac1-91af-d10c1f2d0fff is in state STARTED
2025-09-10 00:59:18.465938 | orchestrator | 2025-09-10 00:59:18 | INFO  | Task 066f7f16-915e-4131-8f34-6925f458478d is in state STARTED
2025-09-10 00:59:18.466160 | orchestrator | 2025-09-10 00:59:18 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:59:21.512199 | orchestrator | 2025-09-10 00:59:21 | INFO  | Task 346e6093-dde8-4ac1-91af-d10c1f2d0fff is in state STARTED
2025-09-10 00:59:21.514329 | orchestrator | 2025-09-10 00:59:21 | INFO  | Task 066f7f16-915e-4131-8f34-6925f458478d is in state STARTED
2025-09-10 00:59:21.514363 | orchestrator | 2025-09-10 00:59:21 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:59:24.557698 | orchestrator | 2025-09-10 00:59:24 | INFO  | Task 346e6093-dde8-4ac1-91af-d10c1f2d0fff is in state STARTED
2025-09-10 00:59:24.559945 | orchestrator | 2025-09-10 00:59:24 | INFO  | Task 066f7f16-915e-4131-8f34-6925f458478d is in state STARTED
2025-09-10 00:59:24.559977 | orchestrator | 2025-09-10 00:59:24 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:59:27.597547 | orchestrator | 2025-09-10 00:59:27 | INFO  | Task 346e6093-dde8-4ac1-91af-d10c1f2d0fff is in state STARTED
2025-09-10 00:59:27.599021 | orchestrator | 2025-09-10 00:59:27 | INFO  | Task 066f7f16-915e-4131-8f34-6925f458478d is in state STARTED
2025-09-10 00:59:27.599055 | orchestrator | 2025-09-10 00:59:27 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:59:30.646887 | orchestrator | 2025-09-10 00:59:30 | INFO  | Task 346e6093-dde8-4ac1-91af-d10c1f2d0fff is in state STARTED
2025-09-10 00:59:30.648336 | orchestrator | 2025-09-10 00:59:30 | INFO  | Task 066f7f16-915e-4131-8f34-6925f458478d is in state STARTED
2025-09-10 00:59:30.648369 | orchestrator | 2025-09-10 00:59:30 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:59:33.693387 | orchestrator | 2025-09-10 00:59:33 | INFO  | Task 346e6093-dde8-4ac1-91af-d10c1f2d0fff is in state STARTED
2025-09-10 00:59:33.695354 | orchestrator | 2025-09-10 00:59:33 | INFO  | Task 066f7f16-915e-4131-8f34-6925f458478d is in state STARTED
2025-09-10 00:59:33.695658 | orchestrator | 2025-09-10 00:59:33 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:59:36.741353 | orchestrator | 2025-09-10 00:59:36 | INFO  | Task 346e6093-dde8-4ac1-91af-d10c1f2d0fff is in state STARTED
2025-09-10 00:59:36.744034 | orchestrator | 2025-09-10 00:59:36 | INFO  | Task 066f7f16-915e-4131-8f34-6925f458478d is in state STARTED
2025-09-10 00:59:36.744056 | orchestrator | 2025-09-10 00:59:36 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:59:39.795275 | orchestrator | 2025-09-10 00:59:39 | INFO  | Task 346e6093-dde8-4ac1-91af-d10c1f2d0fff is in state STARTED
2025-09-10 00:59:39.797325 | orchestrator | 2025-09-10 00:59:39 | INFO  | Task 066f7f16-915e-4131-8f34-6925f458478d is in state STARTED
2025-09-10 00:59:39.797645 | orchestrator | 2025-09-10 00:59:39 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:59:42.854821 | orchestrator | 2025-09-10 00:59:42 | INFO  | Task 346e6093-dde8-4ac1-91af-d10c1f2d0fff is in state STARTED
2025-09-10 00:59:42.856505 | orchestrator | 2025-09-10 00:59:42 | INFO  | Task 066f7f16-915e-4131-8f34-6925f458478d is in state STARTED
2025-09-10 00:59:42.856535 | orchestrator | 2025-09-10 00:59:42 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:59:45.911318 | orchestrator | 2025-09-10 00:59:45 | INFO  | Task 346e6093-dde8-4ac1-91af-d10c1f2d0fff is in state STARTED
2025-09-10 00:59:45.912162 | orchestrator | 2025-09-10 00:59:45 | INFO  | Task 066f7f16-915e-4131-8f34-6925f458478d is in state STARTED
2025-09-10 00:59:45.912194 | orchestrator | 2025-09-10 00:59:45 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:59:48.954361 | orchestrator | 2025-09-10 00:59:48 | INFO  | Task 346e6093-dde8-4ac1-91af-d10c1f2d0fff is in state STARTED
2025-09-10 00:59:48.955885 | orchestrator | 2025-09-10 00:59:48 | INFO  | Task 066f7f16-915e-4131-8f34-6925f458478d is in state STARTED
2025-09-10 00:59:48.955917 | orchestrator | 2025-09-10 00:59:48 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:59:52.002969 | orchestrator | 2025-09-10 00:59:52 | INFO  | Task f60971a9-de42-4d43-ae23-83fe38836333 is in state STARTED
2025-09-10 00:59:52.003128 | orchestrator | 2025-09-10 00:59:52 | INFO  | Task e1a65212-f345-43da-ac08-7ce109309d58 is in state STARTED
2025-09-10 00:59:52.004011 | orchestrator | 2025-09-10 00:59:52 | INFO  | Task 346e6093-dde8-4ac1-91af-d10c1f2d0fff is in state STARTED
2025-09-10 00:59:52.006536 | orchestrator | 2025-09-10 00:59:52 | INFO  | Task 0ae2186c-6168-415a-9a09-6eb0ba8345d6 is in state STARTED
2025-09-10 00:59:52.006646 | orchestrator | 2025-09-10 00:59:52 | INFO  | Task 066f7f16-915e-4131-8f34-6925f458478d is in state SUCCESS
2025-09-10 00:59:52.006666 | orchestrator | 2025-09-10 00:59:52 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:59:55.054400 | orchestrator | 2025-09-10 00:59:55 | INFO  | Task f60971a9-de42-4d43-ae23-83fe38836333 is in state STARTED
2025-09-10 00:59:55.054541 | orchestrator | 2025-09-10 00:59:55 | INFO  | Task e1a65212-f345-43da-ac08-7ce109309d58 is in state STARTED
2025-09-10 00:59:55.057381 | orchestrator | 2025-09-10 00:59:55 | INFO  | Task 346e6093-dde8-4ac1-91af-d10c1f2d0fff is in state STARTED
2025-09-10 00:59:55.057992 | orchestrator | 2025-09-10 00:59:55 | INFO  | Task 0ae2186c-6168-415a-9a09-6eb0ba8345d6 is in state STARTED
2025-09-10 00:59:55.058223 | orchestrator | 2025-09-10 00:59:55 | INFO  | Wait 1 second(s) until the next check
2025-09-10 00:59:58.098125 | orchestrator | 2025-09-10 00:59:58 | INFO  | Task f60971a9-de42-4d43-ae23-83fe38836333 is in state STARTED
2025-09-10 00:59:58.098338 | orchestrator | 2025-09-10 00:59:58 | INFO  | Task e1a65212-f345-43da-ac08-7ce109309d58 is in state SUCCESS
2025-09-10 00:59:58.098372 | orchestrator | 2025-09-10 00:59:58 | INFO  | Task 8bd23522-7f4e-4321-8e6b-a12ff01d28c7 is in state STARTED
2025-09-10 00:59:58.099577 | orchestrator | 2025-09-10 00:59:58 | INFO  | Task 346e6093-dde8-4ac1-91af-d10c1f2d0fff is in state STARTED
2025-09-10 00:59:58.100261 | orchestrator | 2025-09-10 00:59:58 | INFO  | Task 2fe4a5b4-d4c6-4870-a69e-cd7631b487ef is in state STARTED
2025-09-10 00:59:58.101567 | orchestrator | 2025-09-10 00:59:58 | INFO  | Task 0ae2186c-6168-415a-9a09-6eb0ba8345d6 is in state STARTED
2025-09-10 00:59:58.101589 | orchestrator | 2025-09-10 00:59:58 | INFO  | Wait 1 second(s) until the next check
2025-09-10 01:00:01.145615 | orchestrator | 2025-09-10 01:00:01 | INFO  | Task f60971a9-de42-4d43-ae23-83fe38836333 is in state STARTED
2025-09-10 01:00:01.146698 | orchestrator | 2025-09-10 01:00:01 | INFO  | Task
8bd23522-7f4e-4321-8e6b-a12ff01d28c7 is in state STARTED
2025-09-10 01:00:01.149112 | orchestrator | 2025-09-10 01:00:01 | INFO  | Task 7c694a03-2033-4e59-b064-6c92d4171981 is in state STARTED
2025-09-10 01:00:01.150226 | orchestrator | 2025-09-10 01:00:01 | INFO  | Task 346e6093-dde8-4ac1-91af-d10c1f2d0fff is in state SUCCESS
2025-09-10 01:00:01.151619 | orchestrator |
2025-09-10 01:00:01.151637 | orchestrator |
2025-09-10 01:00:01.151642 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2025-09-10 01:00:01.151648 | orchestrator |
2025-09-10 01:00:01.151653 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2025-09-10 01:00:01.151658 | orchestrator | Wednesday 10 September 2025 00:58:57 +0000 (0:00:00.255) 0:00:00.255 ***
2025-09-10 01:00:01.151664 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2025-09-10 01:00:01.151670 | orchestrator |
2025-09-10 01:00:01.151675 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2025-09-10 01:00:01.151679 | orchestrator | Wednesday 10 September 2025 00:58:57 +0000 (0:00:00.256) 0:00:00.511 ***
2025-09-10 01:00:01.151685 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2025-09-10 01:00:01.151690 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2025-09-10 01:00:01.151696 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2025-09-10 01:00:01.151701 | orchestrator |
2025-09-10 01:00:01.151742 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2025-09-10 01:00:01.151746 | orchestrator | Wednesday 10 September 2025 00:58:58 +0000 (0:00:01.219) 0:00:01.731 ***
2025-09-10 01:00:01.151750 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2025-09-10 01:00:01.151754 | orchestrator |
2025-09-10 01:00:01.151758 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2025-09-10 01:00:01.151761 | orchestrator | Wednesday 10 September 2025 00:59:00 +0000 (0:00:01.077) 0:00:02.809 ***
2025-09-10 01:00:01.151765 | orchestrator | changed: [testbed-manager]
2025-09-10 01:00:01.151769 | orchestrator |
2025-09-10 01:00:01.151773 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2025-09-10 01:00:01.151777 | orchestrator | Wednesday 10 September 2025 00:59:01 +0000 (0:00:01.156) 0:00:03.965 ***
2025-09-10 01:00:01.151780 | orchestrator | changed: [testbed-manager]
2025-09-10 01:00:01.151784 | orchestrator |
2025-09-10 01:00:01.151788 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2025-09-10 01:00:01.151791 | orchestrator | Wednesday 10 September 2025 00:59:02 +0000 (0:00:00.879) 0:00:04.845 ***
2025-09-10 01:00:01.151809 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2025-09-10 01:00:01.151813 | orchestrator | ok: [testbed-manager]
2025-09-10 01:00:01.151817 | orchestrator |
2025-09-10 01:00:01.151821 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2025-09-10 01:00:01.151825 | orchestrator | Wednesday 10 September 2025 00:59:39 +0000 (0:00:37.002) 0:00:41.847 ***
2025-09-10 01:00:01.151829 | orchestrator | changed: [testbed-manager] => (item=ceph)
2025-09-10 01:00:01.151833 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2025-09-10 01:00:01.151836 | orchestrator | changed: [testbed-manager] => (item=rados)
2025-09-10 01:00:01.151840 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2025-09-10 01:00:01.151844 | orchestrator | changed: [testbed-manager] => (item=rbd)
2025-09-10 01:00:01.151847 | orchestrator |
2025-09-10 01:00:01.151851 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2025-09-10 01:00:01.151855 | orchestrator | Wednesday 10 September 2025 00:59:43 +0000 (0:00:04.153) 0:00:46.001 ***
2025-09-10 01:00:01.151859 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2025-09-10 01:00:01.151862 | orchestrator |
2025-09-10 01:00:01.151866 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2025-09-10 01:00:01.151870 | orchestrator | Wednesday 10 September 2025 00:59:43 +0000 (0:00:00.510) 0:00:46.511 ***
2025-09-10 01:00:01.151873 | orchestrator | skipping: [testbed-manager]
2025-09-10 01:00:01.151877 | orchestrator |
2025-09-10 01:00:01.151881 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2025-09-10 01:00:01.151884 | orchestrator | Wednesday 10 September 2025 00:59:43 +0000 (0:00:00.124) 0:00:46.635 ***
2025-09-10 01:00:01.151888 | orchestrator | skipping: [testbed-manager]
2025-09-10 01:00:01.151892 | orchestrator |
2025-09-10 01:00:01.151896 | orchestrator | RUNNING HANDLER
[osism.services.cephclient : Restart cephclient service] ******* 2025-09-10 01:00:01.151899 | orchestrator | Wednesday 10 September 2025 00:59:44 +0000 (0:00:00.316) 0:00:46.952 *** 2025-09-10 01:00:01.151903 | orchestrator | changed: [testbed-manager] 2025-09-10 01:00:01.151907 | orchestrator | 2025-09-10 01:00:01.151910 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-09-10 01:00:01.151914 | orchestrator | Wednesday 10 September 2025 00:59:46 +0000 (0:00:02.006) 0:00:48.958 *** 2025-09-10 01:00:01.151918 | orchestrator | changed: [testbed-manager] 2025-09-10 01:00:01.151921 | orchestrator | 2025-09-10 01:00:01.151925 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-09-10 01:00:01.151929 | orchestrator | Wednesday 10 September 2025 00:59:47 +0000 (0:00:00.842) 0:00:49.801 *** 2025-09-10 01:00:01.151933 | orchestrator | changed: [testbed-manager] 2025-09-10 01:00:01.151936 | orchestrator | 2025-09-10 01:00:01.151945 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-09-10 01:00:01.151949 | orchestrator | Wednesday 10 September 2025 00:59:47 +0000 (0:00:00.676) 0:00:50.477 *** 2025-09-10 01:00:01.151952 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-09-10 01:00:01.151956 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-09-10 01:00:01.151960 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-09-10 01:00:01.151964 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-09-10 01:00:01.152005 | orchestrator | 2025-09-10 01:00:01.152009 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-10 01:00:01.152013 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-10 01:00:01.152018 | orchestrator | 2025-09-10 01:00:01.152022 | orchestrator | 2025-09-10 
01:00:01.152033 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-10 01:00:01.152038 | orchestrator | Wednesday 10 September 2025 00:59:49 +0000 (0:00:01.540) 0:00:52.018 *** 2025-09-10 01:00:01.152041 | orchestrator | =============================================================================== 2025-09-10 01:00:01.152045 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 37.00s 2025-09-10 01:00:01.152049 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.15s 2025-09-10 01:00:01.152052 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 2.01s 2025-09-10 01:00:01.152056 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.54s 2025-09-10 01:00:01.152060 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.22s 2025-09-10 01:00:01.152064 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.16s 2025-09-10 01:00:01.152068 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.08s 2025-09-10 01:00:01.152114 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.88s 2025-09-10 01:00:01.152120 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.84s 2025-09-10 01:00:01.152124 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.68s 2025-09-10 01:00:01.152127 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.51s 2025-09-10 01:00:01.152131 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.32s 2025-09-10 01:00:01.152135 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.26s 2025-09-10 01:00:01.152138 | 
orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.12s 2025-09-10 01:00:01.152142 | orchestrator | 2025-09-10 01:00:01.152146 | orchestrator | 2025-09-10 01:00:01.152149 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-10 01:00:01.152153 | orchestrator | 2025-09-10 01:00:01.152157 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-10 01:00:01.152160 | orchestrator | Wednesday 10 September 2025 00:59:53 +0000 (0:00:00.216) 0:00:00.216 *** 2025-09-10 01:00:01.152164 | orchestrator | ok: [testbed-node-0] 2025-09-10 01:00:01.152168 | orchestrator | ok: [testbed-node-1] 2025-09-10 01:00:01.152172 | orchestrator | ok: [testbed-node-2] 2025-09-10 01:00:01.152175 | orchestrator | 2025-09-10 01:00:01.152183 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-10 01:00:01.152187 | orchestrator | Wednesday 10 September 2025 00:59:54 +0000 (0:00:00.320) 0:00:00.536 *** 2025-09-10 01:00:01.152191 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-09-10 01:00:01.152195 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-09-10 01:00:01.152199 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-09-10 01:00:01.152202 | orchestrator | 2025-09-10 01:00:01.152206 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-09-10 01:00:01.152210 | orchestrator | 2025-09-10 01:00:01.152213 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-09-10 01:00:01.152221 | orchestrator | Wednesday 10 September 2025 00:59:54 +0000 (0:00:00.804) 0:00:01.341 *** 2025-09-10 01:00:01.152225 | orchestrator | ok: [testbed-node-1] 2025-09-10 01:00:01.152229 | orchestrator | ok: [testbed-node-2] 2025-09-10 01:00:01.152232 | orchestrator | ok: 
[testbed-node-0] 2025-09-10 01:00:01.152236 | orchestrator | 2025-09-10 01:00:01.152240 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-10 01:00:01.152245 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-10 01:00:01.152249 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-10 01:00:01.152253 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-10 01:00:01.152256 | orchestrator | 2025-09-10 01:00:01.152260 | orchestrator | 2025-09-10 01:00:01.152264 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-10 01:00:01.152267 | orchestrator | Wednesday 10 September 2025 00:59:55 +0000 (0:00:00.965) 0:00:02.307 *** 2025-09-10 01:00:01.152271 | orchestrator | =============================================================================== 2025-09-10 01:00:01.152275 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.97s 2025-09-10 01:00:01.152278 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.80s 2025-09-10 01:00:01.152282 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s 2025-09-10 01:00:01.152286 | orchestrator | 2025-09-10 01:00:01.152289 | orchestrator | 2025-09-10 01:00:01.152293 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-10 01:00:01.152297 | orchestrator | 2025-09-10 01:00:01.152300 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-10 01:00:01.152304 | orchestrator | Wednesday 10 September 2025 00:57:11 +0000 (0:00:00.262) 0:00:00.263 *** 2025-09-10 01:00:01.152308 | orchestrator | ok: [testbed-node-0] 2025-09-10 01:00:01.152311 | 
orchestrator | ok: [testbed-node-1] 2025-09-10 01:00:01.152315 | orchestrator | ok: [testbed-node-2] 2025-09-10 01:00:01.152319 | orchestrator | 2025-09-10 01:00:01.152322 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-10 01:00:01.152326 | orchestrator | Wednesday 10 September 2025 00:57:11 +0000 (0:00:00.315) 0:00:00.578 *** 2025-09-10 01:00:01.152330 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-09-10 01:00:01.152334 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-09-10 01:00:01.152338 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-09-10 01:00:01.152341 | orchestrator | 2025-09-10 01:00:01.152345 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-09-10 01:00:01.152349 | orchestrator | 2025-09-10 01:00:01.152355 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-10 01:00:01.152588 | orchestrator | Wednesday 10 September 2025 00:57:11 +0000 (0:00:00.428) 0:00:01.007 *** 2025-09-10 01:00:01.152595 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 01:00:01.152600 | orchestrator | 2025-09-10 01:00:01.152604 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-09-10 01:00:01.152608 | orchestrator | Wednesday 10 September 2025 00:57:12 +0000 (0:00:00.516) 0:00:01.524 *** 2025-09-10 01:00:01.152616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-10 01:00:01.152635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-10 01:00:01.152640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-10 01:00:01.152645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-10 01:00:01.152668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}}) 2025-09-10 01:00:01.152672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-10 01:00:01.152684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-10 01:00:01.152689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-10 01:00:01.152693 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-10 01:00:01.152697 | orchestrator | 2025-09-10 01:00:01.152700 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-09-10 01:00:01.152704 | orchestrator | Wednesday 10 September 2025 00:57:14 +0000 (0:00:01.811) 0:00:03.335 *** 2025-09-10 01:00:01.152708 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-09-10 01:00:01.152712 | orchestrator | 2025-09-10 01:00:01.152716 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-09-10 01:00:01.152720 | orchestrator | Wednesday 10 September 2025 00:57:15 +0000 (0:00:00.927) 0:00:04.262 *** 2025-09-10 01:00:01.152737 | orchestrator | ok: [testbed-node-0] 2025-09-10 01:00:01.152742 | orchestrator | ok: [testbed-node-1] 2025-09-10 01:00:01.152746 | orchestrator | ok: [testbed-node-2] 2025-09-10 01:00:01.152749 | orchestrator | 2025-09-10 01:00:01.152753 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-09-10 01:00:01.152757 | orchestrator | Wednesday 10 September 2025 00:57:15 +0000 (0:00:00.531) 0:00:04.794 *** 2025-09-10 01:00:01.152761 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-10 01:00:01.152765 | orchestrator | 2025-09-10 01:00:01.152768 | orchestrator | TASK [keystone : 
include_tasks] ************************************************ 2025-09-10 01:00:01.152772 | orchestrator | Wednesday 10 September 2025 00:57:16 +0000 (0:00:00.674) 0:00:05.468 *** 2025-09-10 01:00:01.152776 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 01:00:01.152780 | orchestrator | 2025-09-10 01:00:01.152786 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-09-10 01:00:01.152794 | orchestrator | Wednesday 10 September 2025 00:57:16 +0000 (0:00:00.507) 0:00:05.976 *** 2025-09-10 01:00:01.152799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-10 01:00:01.152806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-10 01:00:01.152810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-10 01:00:01.152815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-10 01:00:01.152824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-10 01:00:01.152833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-10 01:00:01.152837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-10 01:00:01.152844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-10 01:00:01.152848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-10 01:00:01.152852 | orchestrator | 2025-09-10 01:00:01.152855 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-09-10 01:00:01.152859 | orchestrator | Wednesday 10 September 2025 00:57:20 +0000 (0:00:03.278) 0:00:09.254 *** 2025-09-10 01:00:01.152863 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-10 01:00:01.152894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-10 01:00:01.152898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-10 01:00:01.152902 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:00:01.152913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-10 01:00:01.152917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-10 01:00:01.152921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-10 01:00:01.152925 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:00:01.152932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-10 01:00:01.152941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-10 01:00:01.152945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-10 01:00:01.152949 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:00:01.152953 | orchestrator | 2025-09-10 01:00:01.152957 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-09-10 01:00:01.152964 | orchestrator | Wednesday 10 September 2025 00:57:20 +0000 (0:00:00.809) 0:00:10.063 *** 2025-09-10 01:00:01.152968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-10 01:00:01.152972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-10 01:00:01.152979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-10 01:00:01.152983 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:00:01.152990 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-10 01:00:01.152995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-10 01:00:01.153002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-10 01:00:01.153006 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:00:01.153010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-10 01:00:01.153018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-10 01:00:01.153025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-10 01:00:01.153029 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:00:01.153033 | orchestrator | 2025-09-10 01:00:01.153036 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-09-10 01:00:01.153040 | orchestrator | Wednesday 10 September 2025 00:57:21 +0000 (0:00:00.737) 0:00:10.800 *** 2025-09-10 01:00:01.153044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-10 01:00:01.153051 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-10 01:00:01.153056 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-10 01:00:01.153065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-10 01:00:01.153070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-10 01:00:01.153073 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-10 01:00:01.153080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-10 01:00:01.153084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-10 01:00:01.153088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-10 01:00:01.153094 | orchestrator | 2025-09-10 01:00:01.153098 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-09-10 01:00:01.153102 | orchestrator | Wednesday 10 September 2025 00:57:24 +0000 (0:00:03.147) 0:00:13.948 *** 2025-09-10 01:00:01.153109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-10 01:00:01.153114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-10 01:00:01.153121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-10 01:00:01.153125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-10 01:00:01.153133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 
'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-10 01:00:01.153137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-10 01:00:01.153143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-10 01:00:01.153147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-10 01:00:01.153154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-10 01:00:01.153158 | orchestrator | 2025-09-10 01:00:01.153162 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-09-10 01:00:01.153166 | orchestrator | Wednesday 10 September 2025 00:57:29 +0000 (0:00:05.208) 0:00:19.157 *** 2025-09-10 01:00:01.153169 | orchestrator | changed: [testbed-node-0] 2025-09-10 01:00:01.153173 | orchestrator | changed: [testbed-node-1] 2025-09-10 01:00:01.153177 | orchestrator | changed: [testbed-node-2] 2025-09-10 01:00:01.153181 | orchestrator | 2025-09-10 01:00:01.153187 | orchestrator | TASK [keystone : 
Create Keystone domain-specific config directory] ************* 2025-09-10 01:00:01.153191 | orchestrator | Wednesday 10 September 2025 00:57:31 +0000 (0:00:01.443) 0:00:20.600 *** 2025-09-10 01:00:01.153195 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:00:01.153199 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:00:01.153202 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:00:01.153206 | orchestrator | 2025-09-10 01:00:01.153210 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-09-10 01:00:01.153213 | orchestrator | Wednesday 10 September 2025 00:57:31 +0000 (0:00:00.502) 0:00:21.103 *** 2025-09-10 01:00:01.153217 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:00:01.153221 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:00:01.153225 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:00:01.153228 | orchestrator | 2025-09-10 01:00:01.153232 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-09-10 01:00:01.153236 | orchestrator | Wednesday 10 September 2025 00:57:32 +0000 (0:00:00.286) 0:00:21.389 *** 2025-09-10 01:00:01.153239 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:00:01.153243 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:00:01.153247 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:00:01.153250 | orchestrator | 2025-09-10 01:00:01.153254 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-09-10 01:00:01.153258 | orchestrator | Wednesday 10 September 2025 00:57:32 +0000 (0:00:00.495) 0:00:21.885 *** 2025-09-10 01:00:01.153262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-10 01:00:01.153268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-10 01:00:01.153273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-10 01:00:01.153285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-10 01:00:01.153289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-10 01:00:01.153293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-10 01:00:01.153301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-10 01:00:01.153305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-10 01:00:01.153309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-10 01:00:01.153316 | orchestrator | 2025-09-10 01:00:01.153320 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-10 01:00:01.153326 | orchestrator | Wednesday 10 September 2025 00:57:34 +0000 (0:00:02.313) 0:00:24.198 *** 2025-09-10 01:00:01.153330 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:00:01.153334 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:00:01.153338 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:00:01.153342 | orchestrator | 2025-09-10 01:00:01.153345 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-09-10 01:00:01.153349 | orchestrator | Wednesday 10 September 2025 00:57:35 +0000 (0:00:00.304) 0:00:24.503 *** 2025-09-10 01:00:01.153353 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-10 01:00:01.153357 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-10 01:00:01.153361 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-10 01:00:01.153364 | orchestrator | 2025-09-10 
01:00:01.153368 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-09-10 01:00:01.153372 | orchestrator | Wednesday 10 September 2025 00:57:37 +0000 (0:00:01.858) 0:00:26.362 *** 2025-09-10 01:00:01.153375 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-10 01:00:01.153379 | orchestrator | 2025-09-10 01:00:01.153383 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-09-10 01:00:01.153386 | orchestrator | Wednesday 10 September 2025 00:57:38 +0000 (0:00:00.915) 0:00:27.278 *** 2025-09-10 01:00:01.153390 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:00:01.153394 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:00:01.153397 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:00:01.153401 | orchestrator | 2025-09-10 01:00:01.153405 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-09-10 01:00:01.153408 | orchestrator | Wednesday 10 September 2025 00:57:38 +0000 (0:00:00.776) 0:00:28.054 *** 2025-09-10 01:00:01.153412 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-10 01:00:01.153416 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-10 01:00:01.153419 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-10 01:00:01.153423 | orchestrator | 2025-09-10 01:00:01.153427 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-09-10 01:00:01.153431 | orchestrator | Wednesday 10 September 2025 00:57:39 +0000 (0:00:01.025) 0:00:29.080 *** 2025-09-10 01:00:01.153434 | orchestrator | ok: [testbed-node-0] 2025-09-10 01:00:01.153438 | orchestrator | ok: [testbed-node-1] 2025-09-10 01:00:01.153442 | orchestrator | ok: [testbed-node-2] 2025-09-10 01:00:01.153445 | orchestrator | 2025-09-10 01:00:01.153449 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-09-10 
01:00:01.153453 | orchestrator | Wednesday 10 September 2025 00:57:40 +0000 (0:00:00.307) 0:00:29.388 *** 2025-09-10 01:00:01.153456 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-10 01:00:01.153460 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-10 01:00:01.153464 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-10 01:00:01.153468 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-10 01:00:01.153471 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-10 01:00:01.153477 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-10 01:00:01.153484 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-10 01:00:01.153488 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-10 01:00:01.153492 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-10 01:00:01.153495 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-10 01:00:01.153499 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-10 01:00:01.153503 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-10 01:00:01.153506 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-10 01:00:01.153510 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 
'fernet-healthcheck.sh'}) 2025-09-10 01:00:01.153514 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-10 01:00:01.153517 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-10 01:00:01.153521 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-10 01:00:01.153525 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-10 01:00:01.153528 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-10 01:00:01.153532 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-10 01:00:01.153536 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-10 01:00:01.153539 | orchestrator | 2025-09-10 01:00:01.153543 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-09-10 01:00:01.153549 | orchestrator | Wednesday 10 September 2025 00:57:49 +0000 (0:00:08.960) 0:00:38.348 *** 2025-09-10 01:00:01.153553 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-10 01:00:01.153557 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-10 01:00:01.153560 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-10 01:00:01.153564 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-10 01:00:01.153568 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-10 01:00:01.153571 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-10 01:00:01.153575 | orchestrator | 
2025-09-10 01:00:01.153579 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-09-10 01:00:01.153582 | orchestrator | Wednesday 10 September 2025 00:57:52 +0000 (0:00:02.921) 0:00:41.269 *** 2025-09-10 01:00:01.153586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-10 01:00:01.153601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 
'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-10 01:00:01.153605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-10 01:00:01.153612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-10 01:00:01.153616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-10 01:00:01.153620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-10 01:00:01.153628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-10 01:00:01.153635 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-10 01:00:01.153639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-10 01:00:01.153643 | orchestrator | 2025-09-10 01:00:01.153647 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-10 01:00:01.153650 | orchestrator | Wednesday 10 September 2025 00:57:54 +0000 (0:00:02.315) 0:00:43.585 *** 2025-09-10 01:00:01.153654 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:00:01.153658 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:00:01.153662 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:00:01.153665 | orchestrator | 2025-09-10 01:00:01.153669 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-09-10 01:00:01.153673 | orchestrator | Wednesday 10 September 
2025 00:57:54 +0000 (0:00:00.335) 0:00:43.920 *** 2025-09-10 01:00:01.153676 | orchestrator | changed: [testbed-node-0] 2025-09-10 01:00:01.153680 | orchestrator | 2025-09-10 01:00:01.153684 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-09-10 01:00:01.153688 | orchestrator | Wednesday 10 September 2025 00:57:56 +0000 (0:00:02.240) 0:00:46.160 *** 2025-09-10 01:00:01.153691 | orchestrator | changed: [testbed-node-0] 2025-09-10 01:00:01.153695 | orchestrator | 2025-09-10 01:00:01.153699 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-09-10 01:00:01.153705 | orchestrator | Wednesday 10 September 2025 00:57:59 +0000 (0:00:02.270) 0:00:48.431 *** 2025-09-10 01:00:01.153709 | orchestrator | ok: [testbed-node-0] 2025-09-10 01:00:01.153713 | orchestrator | ok: [testbed-node-1] 2025-09-10 01:00:01.153716 | orchestrator | ok: [testbed-node-2] 2025-09-10 01:00:01.153720 | orchestrator | 2025-09-10 01:00:01.153732 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-09-10 01:00:01.153736 | orchestrator | Wednesday 10 September 2025 00:58:00 +0000 (0:00:00.945) 0:00:49.376 *** 2025-09-10 01:00:01.153740 | orchestrator | ok: [testbed-node-0] 2025-09-10 01:00:01.153743 | orchestrator | ok: [testbed-node-1] 2025-09-10 01:00:01.153747 | orchestrator | ok: [testbed-node-2] 2025-09-10 01:00:01.153751 | orchestrator | 2025-09-10 01:00:01.153754 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-09-10 01:00:01.153761 | orchestrator | Wednesday 10 September 2025 00:58:00 +0000 (0:00:00.653) 0:00:50.029 *** 2025-09-10 01:00:01.153765 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:00:01.153769 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:00:01.153773 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:00:01.153776 | orchestrator | 2025-09-10 
01:00:01.153780 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-09-10 01:00:01.153784 | orchestrator | Wednesday 10 September 2025 00:58:01 +0000 (0:00:00.345) 0:00:50.375 *** 2025-09-10 01:00:01.153787 | orchestrator | changed: [testbed-node-0] 2025-09-10 01:00:01.153791 | orchestrator | 2025-09-10 01:00:01.153795 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-09-10 01:00:01.153798 | orchestrator | Wednesday 10 September 2025 00:58:14 +0000 (0:00:13.614) 0:01:03.989 *** 2025-09-10 01:00:01.153802 | orchestrator | changed: [testbed-node-0] 2025-09-10 01:00:01.153806 | orchestrator | 2025-09-10 01:00:01.153810 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-10 01:00:01.153813 | orchestrator | Wednesday 10 September 2025 00:58:24 +0000 (0:00:09.359) 0:01:13.349 *** 2025-09-10 01:00:01.153817 | orchestrator | 2025-09-10 01:00:01.153821 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-10 01:00:01.153824 | orchestrator | Wednesday 10 September 2025 00:58:24 +0000 (0:00:00.064) 0:01:13.413 *** 2025-09-10 01:00:01.153828 | orchestrator | 2025-09-10 01:00:01.153832 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-10 01:00:01.153835 | orchestrator | Wednesday 10 September 2025 00:58:24 +0000 (0:00:00.064) 0:01:13.477 *** 2025-09-10 01:00:01.153839 | orchestrator | 2025-09-10 01:00:01.153843 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-09-10 01:00:01.153846 | orchestrator | Wednesday 10 September 2025 00:58:24 +0000 (0:00:00.067) 0:01:13.545 *** 2025-09-10 01:00:01.153850 | orchestrator | changed: [testbed-node-0] 2025-09-10 01:00:01.153854 | orchestrator | changed: [testbed-node-1] 2025-09-10 01:00:01.153858 | orchestrator | changed: 
[testbed-node-2] 2025-09-10 01:00:01.153861 | orchestrator | 2025-09-10 01:00:01.153865 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-09-10 01:00:01.153869 | orchestrator | Wednesday 10 September 2025 00:58:51 +0000 (0:00:26.703) 0:01:40.248 *** 2025-09-10 01:00:01.153872 | orchestrator | changed: [testbed-node-1] 2025-09-10 01:00:01.153876 | orchestrator | changed: [testbed-node-2] 2025-09-10 01:00:01.153880 | orchestrator | changed: [testbed-node-0] 2025-09-10 01:00:01.153884 | orchestrator | 2025-09-10 01:00:01.153887 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-09-10 01:00:01.153891 | orchestrator | Wednesday 10 September 2025 00:58:58 +0000 (0:00:07.512) 0:01:47.760 *** 2025-09-10 01:00:01.153895 | orchestrator | changed: [testbed-node-0] 2025-09-10 01:00:01.153898 | orchestrator | changed: [testbed-node-1] 2025-09-10 01:00:01.153904 | orchestrator | changed: [testbed-node-2] 2025-09-10 01:00:01.153908 | orchestrator | 2025-09-10 01:00:01.153912 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-10 01:00:01.153915 | orchestrator | Wednesday 10 September 2025 00:59:11 +0000 (0:00:12.583) 0:02:00.344 *** 2025-09-10 01:00:01.153919 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 01:00:01.153923 | orchestrator | 2025-09-10 01:00:01.153927 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-09-10 01:00:01.153930 | orchestrator | Wednesday 10 September 2025 00:59:11 +0000 (0:00:00.688) 0:02:01.032 *** 2025-09-10 01:00:01.153934 | orchestrator | ok: [testbed-node-1] 2025-09-10 01:00:01.153938 | orchestrator | ok: [testbed-node-0] 2025-09-10 01:00:01.153941 | orchestrator | ok: [testbed-node-2] 2025-09-10 01:00:01.153945 | orchestrator | 2025-09-10 
01:00:01.153949 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-09-10 01:00:01.153952 | orchestrator | Wednesday 10 September 2025 00:59:12 +0000 (0:00:00.733) 0:02:01.766 *** 2025-09-10 01:00:01.153959 | orchestrator | changed: [testbed-node-0] 2025-09-10 01:00:01.153963 | orchestrator | 2025-09-10 01:00:01.153967 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-09-10 01:00:01.153971 | orchestrator | Wednesday 10 September 2025 00:59:14 +0000 (0:00:01.791) 0:02:03.558 *** 2025-09-10 01:00:01.153974 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-09-10 01:00:01.153978 | orchestrator | 2025-09-10 01:00:01.153982 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-09-10 01:00:01.153985 | orchestrator | Wednesday 10 September 2025 00:59:24 +0000 (0:00:10.075) 0:02:13.634 *** 2025-09-10 01:00:01.153989 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-09-10 01:00:01.153993 | orchestrator | 2025-09-10 01:00:01.153996 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-09-10 01:00:01.154000 | orchestrator | Wednesday 10 September 2025 00:59:45 +0000 (0:00:21.362) 0:02:34.996 *** 2025-09-10 01:00:01.154004 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-09-10 01:00:01.154007 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-09-10 01:00:01.154011 | orchestrator | 2025-09-10 01:00:01.154046 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-09-10 01:00:01.154054 | orchestrator | Wednesday 10 September 2025 00:59:52 +0000 (0:00:06.787) 0:02:41.784 *** 2025-09-10 01:00:01.154058 | orchestrator | skipping: [testbed-node-0] 2025-09-10 
01:00:01.154062 | orchestrator | 2025-09-10 01:00:01.154066 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-09-10 01:00:01.154069 | orchestrator | Wednesday 10 September 2025 00:59:52 +0000 (0:00:00.122) 0:02:41.907 *** 2025-09-10 01:00:01.154073 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:00:01.154077 | orchestrator | 2025-09-10 01:00:01.154081 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-09-10 01:00:01.154084 | orchestrator | Wednesday 10 September 2025 00:59:52 +0000 (0:00:00.114) 0:02:42.022 *** 2025-09-10 01:00:01.154088 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:00:01.154092 | orchestrator | 2025-09-10 01:00:01.154095 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-09-10 01:00:01.154099 | orchestrator | Wednesday 10 September 2025 00:59:52 +0000 (0:00:00.114) 0:02:42.136 *** 2025-09-10 01:00:01.154103 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:00:01.154107 | orchestrator | 2025-09-10 01:00:01.154110 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-09-10 01:00:01.154114 | orchestrator | Wednesday 10 September 2025 00:59:53 +0000 (0:00:00.533) 0:02:42.669 *** 2025-09-10 01:00:01.154118 | orchestrator | ok: [testbed-node-0] 2025-09-10 01:00:01.154121 | orchestrator | 2025-09-10 01:00:01.154125 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-10 01:00:01.154129 | orchestrator | Wednesday 10 September 2025 00:59:56 +0000 (0:00:03.238) 0:02:45.908 *** 2025-09-10 01:00:01.154133 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:00:01.154136 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:00:01.154140 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:00:01.154144 | orchestrator | 2025-09-10 01:00:01.154147 | orchestrator | 
PLAY RECAP ********************************************************************* 2025-09-10 01:00:01.154151 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-09-10 01:00:01.154156 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-09-10 01:00:01.154160 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-09-10 01:00:01.154164 | orchestrator | 2025-09-10 01:00:01.154167 | orchestrator | 2025-09-10 01:00:01.154175 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-10 01:00:01.154178 | orchestrator | Wednesday 10 September 2025 00:59:57 +0000 (0:00:00.833) 0:02:46.741 *** 2025-09-10 01:00:01.154182 | orchestrator | =============================================================================== 2025-09-10 01:00:01.154186 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 26.70s 2025-09-10 01:00:01.154189 | orchestrator | service-ks-register : keystone | Creating services --------------------- 21.36s 2025-09-10 01:00:01.154193 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.61s 2025-09-10 01:00:01.154197 | orchestrator | keystone : Restart keystone container ---------------------------------- 12.58s 2025-09-10 01:00:01.154200 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 10.08s 2025-09-10 01:00:01.154206 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 9.36s 2025-09-10 01:00:01.154210 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.96s 2025-09-10 01:00:01.154214 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 7.51s 2025-09-10 01:00:01.154218 | orchestrator | service-ks-register : 
keystone | Creating endpoints --------------------- 6.79s 2025-09-10 01:00:01.154221 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.21s 2025-09-10 01:00:01.154225 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.28s 2025-09-10 01:00:01.154229 | orchestrator | keystone : Creating default user role ----------------------------------- 3.24s 2025-09-10 01:00:01.154232 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.15s 2025-09-10 01:00:01.154236 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.92s 2025-09-10 01:00:01.154240 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.32s 2025-09-10 01:00:01.154243 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.31s 2025-09-10 01:00:01.154247 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.27s 2025-09-10 01:00:01.154251 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.24s 2025-09-10 01:00:01.154255 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.86s 2025-09-10 01:00:01.154258 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.81s 2025-09-10 01:00:01.154262 | orchestrator | 2025-09-10 01:00:01 | INFO  | Task 2fe4a5b4-d4c6-4870-a69e-cd7631b487ef is in state STARTED 2025-09-10 01:00:01.154266 | orchestrator | 2025-09-10 01:00:01 | INFO  | Task 0ae2186c-6168-415a-9a09-6eb0ba8345d6 is in state STARTED 2025-09-10 01:00:01.154270 | orchestrator | 2025-09-10 01:00:01 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:00:04.203844 | orchestrator | 2025-09-10 01:00:04 | INFO  | Task f60971a9-de42-4d43-ae23-83fe38836333 is in state STARTED 2025-09-10 01:00:04.204044 | orchestrator | 2025-09-10 
01:00:04 | INFO  | Task 8bd23522-7f4e-4321-8e6b-a12ff01d28c7 is in state STARTED 2025-09-10 01:00:04.204471 | orchestrator | 2025-09-10 01:00:04 | INFO  | Task 7c694a03-2033-4e59-b064-6c92d4171981 is in state STARTED 2025-09-10 01:00:04.205155 | orchestrator | 2025-09-10 01:00:04 | INFO  | Task 2fe4a5b4-d4c6-4870-a69e-cd7631b487ef is in state STARTED 2025-09-10 01:00:04.206872 | orchestrator | 2025-09-10 01:00:04 | INFO  | Task 0ae2186c-6168-415a-9a09-6eb0ba8345d6 is in state STARTED 2025-09-10 01:00:04.206923 | orchestrator | 2025-09-10 01:00:04 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:00:07.247210 | orchestrator | 2025-09-10 01:00:07 | INFO  | Task f60971a9-de42-4d43-ae23-83fe38836333 is in state STARTED 2025-09-10 01:00:07.250202 | orchestrator | 2025-09-10 01:00:07 | INFO  | Task 8bd23522-7f4e-4321-8e6b-a12ff01d28c7 is in state STARTED 2025-09-10 01:00:07.252209 | orchestrator | 2025-09-10 01:00:07 | INFO  | Task 7c694a03-2033-4e59-b064-6c92d4171981 is in state STARTED 2025-09-10 01:00:07.253443 | orchestrator | 2025-09-10 01:00:07 | INFO  | Task 2fe4a5b4-d4c6-4870-a69e-cd7631b487ef is in state STARTED 2025-09-10 01:00:07.254873 | orchestrator | 2025-09-10 01:00:07 | INFO  | Task 0ae2186c-6168-415a-9a09-6eb0ba8345d6 is in state STARTED 2025-09-10 01:00:07.255129 | orchestrator | 2025-09-10 01:00:07 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:00:10.294316 | orchestrator | 2025-09-10 01:00:10 | INFO  | Task f60971a9-de42-4d43-ae23-83fe38836333 is in state STARTED 2025-09-10 01:00:10.297278 | orchestrator | 2025-09-10 01:00:10 | INFO  | Task 8bd23522-7f4e-4321-8e6b-a12ff01d28c7 is in state STARTED 2025-09-10 01:00:10.299966 | orchestrator | 2025-09-10 01:00:10 | INFO  | Task 7c694a03-2033-4e59-b064-6c92d4171981 is in state STARTED 2025-09-10 01:00:10.302707 | orchestrator | 2025-09-10 01:00:10 | INFO  | Task 2fe4a5b4-d4c6-4870-a69e-cd7631b487ef is in state STARTED 2025-09-10 01:00:10.304554 | orchestrator | 2025-09-10 
01:00:10 | INFO  | Task 0ae2186c-6168-415a-9a09-6eb0ba8345d6 is in state STARTED 2025-09-10 01:00:10.304961 | orchestrator | 2025-09-10 01:00:10 | INFO  | Wait 1 second(s) until the next check [identical polling cycles for tasks f60971a9, 8bd23522, 7c694a03, 2fe4a5b4, and 0ae2186c repeated at 01:00:13, 01:00:16, 01:00:19, and 01:00:22] 2025-09-10 01:00:25.560329 | orchestrator | 2025-09-10 01:00:25 | INFO  | Task
0ae2186c-6168-415a-9a09-6eb0ba8345d6 is in state STARTED 2025-09-10 01:00:25.560356 | orchestrator | 2025-09-10 01:00:25 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:00:28.636654 | orchestrator | 2025-09-10 01:00:28 | INFO  | Task f60971a9-de42-4d43-ae23-83fe38836333 is in state STARTED 2025-09-10 01:00:28.637029 | orchestrator | 2025-09-10 01:00:28 | INFO  | Task 8bd23522-7f4e-4321-8e6b-a12ff01d28c7 is in state STARTED 2025-09-10 01:00:28.637664 | orchestrator | 2025-09-10 01:00:28 | INFO  | Task 7c694a03-2033-4e59-b064-6c92d4171981 is in state STARTED 2025-09-10 01:00:28.638336 | orchestrator | 2025-09-10 01:00:28 | INFO  | Task 2fe4a5b4-d4c6-4870-a69e-cd7631b487ef is in state STARTED 2025-09-10 01:00:28.639108 | orchestrator | 2025-09-10 01:00:28 | INFO  | Task 0ae2186c-6168-415a-9a09-6eb0ba8345d6 is in state STARTED 2025-09-10 01:00:28.639131 | orchestrator | 2025-09-10 01:00:28 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:00:31.666451 | orchestrator | 2025-09-10 01:00:31 | INFO  | Task f60971a9-de42-4d43-ae23-83fe38836333 is in state STARTED 2025-09-10 01:00:31.666550 | orchestrator | 2025-09-10 01:00:31 | INFO  | Task 8bd23522-7f4e-4321-8e6b-a12ff01d28c7 is in state STARTED 2025-09-10 01:00:31.666970 | orchestrator | 2025-09-10 01:00:31 | INFO  | Task 7c694a03-2033-4e59-b064-6c92d4171981 is in state STARTED 2025-09-10 01:00:31.667659 | orchestrator | 2025-09-10 01:00:31 | INFO  | Task 2fe4a5b4-d4c6-4870-a69e-cd7631b487ef is in state STARTED 2025-09-10 01:00:31.668166 | orchestrator | 2025-09-10 01:00:31 | INFO  | Task 0ae2186c-6168-415a-9a09-6eb0ba8345d6 is in state STARTED 2025-09-10 01:00:31.668257 | orchestrator | 2025-09-10 01:00:31 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:00:34.691631 | orchestrator | 2025-09-10 01:00:34 | INFO  | Task f60971a9-de42-4d43-ae23-83fe38836333 is in state STARTED 2025-09-10 01:00:34.691921 | orchestrator | 2025-09-10 01:00:34 | INFO  | Task 
8bd23522-7f4e-4321-8e6b-a12ff01d28c7 is in state STARTED 2025-09-10 01:00:34.693714 | orchestrator | 2025-09-10 01:00:34 | INFO  | Task 7c694a03-2033-4e59-b064-6c92d4171981 is in state STARTED 2025-09-10 01:00:34.694421 | orchestrator | 2025-09-10 01:00:34 | INFO  | Task 2fe4a5b4-d4c6-4870-a69e-cd7631b487ef is in state STARTED 2025-09-10 01:00:34.694951 | orchestrator | 2025-09-10 01:00:34 | INFO  | Task 0ae2186c-6168-415a-9a09-6eb0ba8345d6 is in state STARTED 2025-09-10 01:00:34.695023 | orchestrator | 2025-09-10 01:00:34 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:00:37.731835 | orchestrator | 2025-09-10 01:00:37 | INFO  | Task f60971a9-de42-4d43-ae23-83fe38836333 is in state STARTED 2025-09-10 01:00:37.731980 | orchestrator | 2025-09-10 01:00:37 | INFO  | Task 8bd23522-7f4e-4321-8e6b-a12ff01d28c7 is in state STARTED 2025-09-10 01:00:37.731995 | orchestrator | 2025-09-10 01:00:37 | INFO  | Task 7c694a03-2033-4e59-b064-6c92d4171981 is in state STARTED 2025-09-10 01:00:37.732006 | orchestrator | 2025-09-10 01:00:37 | INFO  | Task 2fe4a5b4-d4c6-4870-a69e-cd7631b487ef is in state SUCCESS 2025-09-10 01:00:37.732017 | orchestrator | 2025-09-10 01:00:37 | INFO  | Task 0ae2186c-6168-415a-9a09-6eb0ba8345d6 is in state STARTED 2025-09-10 01:00:37.732028 | orchestrator | 2025-09-10 01:00:37 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:00:40.756787 | orchestrator | 2025-09-10 01:00:40 | INFO  | Task f60971a9-de42-4d43-ae23-83fe38836333 is in state STARTED 2025-09-10 01:00:40.757580 | orchestrator | 2025-09-10 01:00:40 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:00:40.758461 | orchestrator | 2025-09-10 01:00:40 | INFO  | Task 8bd23522-7f4e-4321-8e6b-a12ff01d28c7 is in state STARTED 2025-09-10 01:00:40.759429 | orchestrator | 2025-09-10 01:00:40 | INFO  | Task 7c694a03-2033-4e59-b064-6c92d4171981 is in state STARTED 2025-09-10 01:00:40.759894 | orchestrator | 2025-09-10 01:00:40 | INFO  | Task 
0ae2186c-6168-415a-9a09-6eb0ba8345d6 is in state STARTED 2025-09-10 01:00:40.760147 | orchestrator | 2025-09-10 01:00:40 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:00:43.793976 | orchestrator | 2025-09-10 01:00:43 | INFO  | Task f60971a9-de42-4d43-ae23-83fe38836333 is in state STARTED 2025-09-10 01:00:43.794187 | orchestrator | 2025-09-10 01:00:43 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:00:43.794205 | orchestrator | 2025-09-10 01:00:43 | INFO  | Task 8bd23522-7f4e-4321-8e6b-a12ff01d28c7 is in state STARTED 2025-09-10 01:00:43.796686 | orchestrator | 2025-09-10 01:00:43 | INFO  | Task 7c694a03-2033-4e59-b064-6c92d4171981 is in state STARTED 2025-09-10 01:00:43.796715 | orchestrator | 2025-09-10 01:00:43 | INFO  | Task 0ae2186c-6168-415a-9a09-6eb0ba8345d6 is in state STARTED 2025-09-10 01:00:43.796726 | orchestrator | 2025-09-10 01:00:43 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:00:46.832912 | orchestrator | 2025-09-10 01:00:46 | INFO  | Task f60971a9-de42-4d43-ae23-83fe38836333 is in state STARTED 2025-09-10 01:00:46.833607 | orchestrator | 2025-09-10 01:00:46 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:00:46.838363 | orchestrator | 2025-09-10 01:00:46 | INFO  | Task 8bd23522-7f4e-4321-8e6b-a12ff01d28c7 is in state STARTED 2025-09-10 01:00:46.839142 | orchestrator | 2025-09-10 01:00:46 | INFO  | Task 7c694a03-2033-4e59-b064-6c92d4171981 is in state STARTED 2025-09-10 01:00:46.839170 | orchestrator | 2025-09-10 01:00:46 | INFO  | Task 0ae2186c-6168-415a-9a09-6eb0ba8345d6 is in state STARTED 2025-09-10 01:00:46.839182 | orchestrator | 2025-09-10 01:00:46 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:00:49.867360 | orchestrator | 2025-09-10 01:00:49 | INFO  | Task f60971a9-de42-4d43-ae23-83fe38836333 is in state STARTED 2025-09-10 01:00:49.870919 | orchestrator | 2025-09-10 01:00:49 | INFO  | Task 
d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:00:49.871531 | orchestrator | 2025-09-10 01:00:49 | INFO  | Task 8bd23522-7f4e-4321-8e6b-a12ff01d28c7 is in state STARTED 2025-09-10 01:00:49.872206 | orchestrator | 2025-09-10 01:00:49 | INFO  | Task 7c694a03-2033-4e59-b064-6c92d4171981 is in state STARTED 2025-09-10 01:00:49.872901 | orchestrator | 2025-09-10 01:00:49 | INFO  | Task 0ae2186c-6168-415a-9a09-6eb0ba8345d6 is in state STARTED 2025-09-10 01:00:49.873051 | orchestrator | 2025-09-10 01:00:49 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:00:52.899511 | orchestrator | 2025-09-10 01:00:52 | INFO  | Task f60971a9-de42-4d43-ae23-83fe38836333 is in state STARTED 2025-09-10 01:00:52.899821 | orchestrator | 2025-09-10 01:00:52 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:00:52.900369 | orchestrator | 2025-09-10 01:00:52 | INFO  | Task 8bd23522-7f4e-4321-8e6b-a12ff01d28c7 is in state STARTED 2025-09-10 01:00:52.901257 | orchestrator | 2025-09-10 01:00:52 | INFO  | Task 7c694a03-2033-4e59-b064-6c92d4171981 is in state STARTED 2025-09-10 01:00:52.901880 | orchestrator | 2025-09-10 01:00:52 | INFO  | Task 0ae2186c-6168-415a-9a09-6eb0ba8345d6 is in state STARTED 2025-09-10 01:00:52.901894 | orchestrator | 2025-09-10 01:00:52 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:00:55.930656 | orchestrator | 2025-09-10 01:00:55 | INFO  | Task f60971a9-de42-4d43-ae23-83fe38836333 is in state STARTED 2025-09-10 01:00:55.947347 | orchestrator | 2025-09-10 01:00:55 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:00:55.947384 | orchestrator | 2025-09-10 01:00:55 | INFO  | Task 8bd23522-7f4e-4321-8e6b-a12ff01d28c7 is in state STARTED 2025-09-10 01:00:55.947397 | orchestrator | 2025-09-10 01:00:55 | INFO  | Task 7c694a03-2033-4e59-b064-6c92d4171981 is in state STARTED 2025-09-10 01:00:55.947410 | orchestrator | 2025-09-10 01:00:55 | INFO  | Task 
0ae2186c-6168-415a-9a09-6eb0ba8345d6 is in state STARTED 2025-09-10 01:00:55.947439 | orchestrator | 2025-09-10 01:00:55 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:00:58.958411 | orchestrator | 2025-09-10 01:00:58 | INFO  | Task f60971a9-de42-4d43-ae23-83fe38836333 is in state STARTED 2025-09-10 01:00:58.958834 | orchestrator | 2025-09-10 01:00:58 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:00:58.960612 | orchestrator | 2025-09-10 01:00:58 | INFO  | Task 8bd23522-7f4e-4321-8e6b-a12ff01d28c7 is in state STARTED 2025-09-10 01:00:58.961656 | orchestrator | 2025-09-10 01:00:58 | INFO  | Task 7c694a03-2033-4e59-b064-6c92d4171981 is in state STARTED 2025-09-10 01:00:58.962144 | orchestrator | 2025-09-10 01:00:58 | INFO  | Task 0ae2186c-6168-415a-9a09-6eb0ba8345d6 is in state STARTED 2025-09-10 01:00:58.962217 | orchestrator | 2025-09-10 01:00:58 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:01:01.990659 | orchestrator | 2025-09-10 01:01:01 | INFO  | Task f60971a9-de42-4d43-ae23-83fe38836333 is in state STARTED 2025-09-10 01:01:01.990824 | orchestrator | 2025-09-10 01:01:01 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:01:01.991476 | orchestrator | 2025-09-10 01:01:01 | INFO  | Task 8bd23522-7f4e-4321-8e6b-a12ff01d28c7 is in state STARTED 2025-09-10 01:01:01.991950 | orchestrator | 2025-09-10 01:01:01 | INFO  | Task 7c694a03-2033-4e59-b064-6c92d4171981 is in state STARTED 2025-09-10 01:01:01.993716 | orchestrator | 2025-09-10 01:01:01 | INFO  | Task 0ae2186c-6168-415a-9a09-6eb0ba8345d6 is in state STARTED 2025-09-10 01:01:01.993755 | orchestrator | 2025-09-10 01:01:01 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:01:05.023363 | orchestrator | 2025-09-10 01:01:05 | INFO  | Task f60971a9-de42-4d43-ae23-83fe38836333 is in state STARTED 2025-09-10 01:01:05.023513 | orchestrator | 2025-09-10 01:01:05 | INFO  | Task 
d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:01:05.024731 | orchestrator | 2025-09-10 01:01:05 | INFO  | Task 8bd23522-7f4e-4321-8e6b-a12ff01d28c7 is in state STARTED 2025-09-10 01:01:05.025479 | orchestrator | 2025-09-10 01:01:05 | INFO  | Task 7c694a03-2033-4e59-b064-6c92d4171981 is in state STARTED 2025-09-10 01:01:05.026165 | orchestrator | 2025-09-10 01:01:05 | INFO  | Task 0ae2186c-6168-415a-9a09-6eb0ba8345d6 is in state STARTED 2025-09-10 01:01:05.026192 | orchestrator | 2025-09-10 01:01:05 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:01:08.054357 | orchestrator | 2025-09-10 01:01:08 | INFO  | Task f60971a9-de42-4d43-ae23-83fe38836333 is in state STARTED 2025-09-10 01:01:08.055650 | orchestrator | 2025-09-10 01:01:08 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:01:08.056224 | orchestrator | 2025-09-10 01:01:08 | INFO  | Task 8bd23522-7f4e-4321-8e6b-a12ff01d28c7 is in state STARTED 2025-09-10 01:01:08.056925 | orchestrator | 2025-09-10 01:01:08 | INFO  | Task 7c694a03-2033-4e59-b064-6c92d4171981 is in state STARTED 2025-09-10 01:01:08.057556 | orchestrator | 2025-09-10 01:01:08 | INFO  | Task 0ae2186c-6168-415a-9a09-6eb0ba8345d6 is in state STARTED 2025-09-10 01:01:08.057578 | orchestrator | 2025-09-10 01:01:08 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:01:11.087193 | orchestrator | 2025-09-10 01:01:11 | INFO  | Task f60971a9-de42-4d43-ae23-83fe38836333 is in state STARTED 2025-09-10 01:01:11.087333 | orchestrator | 2025-09-10 01:01:11 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:01:11.087849 | orchestrator | 2025-09-10 01:01:11 | INFO  | Task 8bd23522-7f4e-4321-8e6b-a12ff01d28c7 is in state STARTED 2025-09-10 01:01:11.088463 | orchestrator | 2025-09-10 01:01:11 | INFO  | Task 7c694a03-2033-4e59-b064-6c92d4171981 is in state STARTED 2025-09-10 01:01:11.089150 | orchestrator | 2025-09-10 01:01:11 | INFO  | Task 
0ae2186c-6168-415a-9a09-6eb0ba8345d6 is in state STARTED 2025-09-10 01:01:11.089171 | orchestrator | 2025-09-10 01:01:11 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:01:14.123275 | orchestrator | 2025-09-10 01:01:14 | INFO  | Task f60971a9-de42-4d43-ae23-83fe38836333 is in state STARTED 2025-09-10 01:01:14.123404 | orchestrator | 2025-09-10 01:01:14 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:01:14.123895 | orchestrator | 2025-09-10 01:01:14 | INFO  | Task 8bd23522-7f4e-4321-8e6b-a12ff01d28c7 is in state STARTED 2025-09-10 01:01:14.124532 | orchestrator | 2025-09-10 01:01:14 | INFO  | Task 7c694a03-2033-4e59-b064-6c92d4171981 is in state STARTED 2025-09-10 01:01:14.126146 | orchestrator | 2025-09-10 01:01:14 | INFO  | Task 0ae2186c-6168-415a-9a09-6eb0ba8345d6 is in state STARTED 2025-09-10 01:01:14.126180 | orchestrator | 2025-09-10 01:01:14 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:01:17.149495 | orchestrator | 2025-09-10 01:01:17 | INFO  | Task f60971a9-de42-4d43-ae23-83fe38836333 is in state STARTED 2025-09-10 01:01:17.149635 | orchestrator | 2025-09-10 01:01:17 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:01:17.149941 | orchestrator | 2025-09-10 01:01:17 | INFO  | Task 8bd23522-7f4e-4321-8e6b-a12ff01d28c7 is in state STARTED 2025-09-10 01:01:17.150346 | orchestrator | 2025-09-10 01:01:17 | INFO  | Task 7c694a03-2033-4e59-b064-6c92d4171981 is in state STARTED 2025-09-10 01:01:17.150901 | orchestrator | 2025-09-10 01:01:17 | INFO  | Task 0ae2186c-6168-415a-9a09-6eb0ba8345d6 is in state STARTED 2025-09-10 01:01:17.150924 | orchestrator | 2025-09-10 01:01:17 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:01:20.172917 | orchestrator | 2025-09-10 01:01:20 | INFO  | Task f60971a9-de42-4d43-ae23-83fe38836333 is in state STARTED 2025-09-10 01:01:20.173643 | orchestrator | 2025-09-10 01:01:20 | INFO  | Task 
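The status lines above come from a simple wait loop: the client re-reads each task's state on a fixed interval and logs it until every task has left STARTED. A minimal sketch of that pattern (hypothetical names and injected state lookup; not the actual OSISM/Celery client code):

```python
import time


def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=3600.0):
    """Poll get_state(task_id) until no task reports STARTED.

    get_state is injected so the loop is independent of the task backend
    (Celery result backend, REST API, ...). Returns the final states.
    """
    deadline = time.monotonic() + timeout
    while True:
        # Re-read every task's state once per cycle and log it.
        states = {tid: get_state(tid) for tid in task_ids}
        for tid, state in states.items():
            print(f"Task {tid} is in state {state}")
        if all(state != "STARTED" for state in states.values()):
            return states
        if time.monotonic() > deadline:
            raise TimeoutError("tasks still running after timeout")
        print(f"Wait {int(interval)} second(s) until the next check")
        time.sleep(interval)
```

Note that in the log the cycles are ~3 seconds apart despite the "Wait 1 second(s)" message: querying the state of every task each round adds its own latency on top of the sleep.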
d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:01:20.174603 | orchestrator | 2025-09-10 01:01:20 | INFO  | Task 8bd23522-7f4e-4321-8e6b-a12ff01d28c7 is in state STARTED 2025-09-10 01:01:20.175129 | orchestrator | 2025-09-10 01:01:20 | INFO  | Task 7c694a03-2033-4e59-b064-6c92d4171981 is in state STARTED 2025-09-10 01:01:20.175681 | orchestrator | 2025-09-10 01:01:20 | INFO  | Task 0ae2186c-6168-415a-9a09-6eb0ba8345d6 is in state STARTED 2025-09-10 01:01:20.175716 | orchestrator | 2025-09-10 01:01:20 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:01:23.195202 | orchestrator | 2025-09-10 01:01:23 | INFO  | Task f60971a9-de42-4d43-ae23-83fe38836333 is in state STARTED 2025-09-10 01:01:23.195324 | orchestrator | 2025-09-10 01:01:23 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:01:23.195354 | orchestrator | 2025-09-10 01:01:23 | INFO  | Task 8bd23522-7f4e-4321-8e6b-a12ff01d28c7 is in state STARTED 2025-09-10 01:01:23.195712 | orchestrator | 2025-09-10 01:01:23 | INFO  | Task 7c694a03-2033-4e59-b064-6c92d4171981 is in state STARTED 2025-09-10 01:01:23.196293 | orchestrator | 2025-09-10 01:01:23 | INFO  | Task 0ae2186c-6168-415a-9a09-6eb0ba8345d6 is in state STARTED 2025-09-10 01:01:23.196316 | orchestrator | 2025-09-10 01:01:23 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:01:26.221345 | orchestrator | 2025-09-10 01:01:26.221441 | orchestrator | 2025-09-10 01:01:26.221453 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-10 01:01:26.221462 | orchestrator | 2025-09-10 01:01:26.221470 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-10 01:01:26.221479 | orchestrator | Wednesday 10 September 2025 01:00:03 +0000 (0:00:00.381) 0:00:00.381 *** 2025-09-10 01:01:26.221487 | orchestrator | ok: [testbed-node-0] 2025-09-10 01:01:26.221496 | orchestrator | ok: [testbed-node-1] 
2025-09-10 01:01:26.221503 | orchestrator | ok: [testbed-node-2] 2025-09-10 01:01:26.221510 | orchestrator | ok: [testbed-manager] 2025-09-10 01:01:26.221517 | orchestrator | ok: [testbed-node-3] 2025-09-10 01:01:26.221524 | orchestrator | ok: [testbed-node-4] 2025-09-10 01:01:26.221531 | orchestrator | ok: [testbed-node-5] 2025-09-10 01:01:26.221538 | orchestrator | 2025-09-10 01:01:26.221545 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-10 01:01:26.221552 | orchestrator | Wednesday 10 September 2025 01:00:04 +0000 (0:00:01.096) 0:00:01.478 *** 2025-09-10 01:01:26.221558 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-09-10 01:01:26.221566 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-09-10 01:01:26.221573 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-09-10 01:01:26.221581 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-09-10 01:01:26.221588 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-09-10 01:01:26.221595 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-09-10 01:01:26.221602 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-09-10 01:01:26.221609 | orchestrator | 2025-09-10 01:01:26.221617 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-09-10 01:01:26.221624 | orchestrator | 2025-09-10 01:01:26.221631 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-09-10 01:01:26.221638 | orchestrator | Wednesday 10 September 2025 01:00:05 +0000 (0:00:01.164) 0:00:02.643 *** 2025-09-10 01:01:26.221646 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-10 01:01:26.221677 | orchestrator | 
2025-09-10 01:01:26.221685 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-09-10 01:01:26.221692 | orchestrator | Wednesday 10 September 2025 01:00:06 +0000 (0:00:01.562) 0:00:04.205 *** 2025-09-10 01:01:26.221699 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2025-09-10 01:01:26.221706 | orchestrator | 2025-09-10 01:01:26.221712 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-09-10 01:01:26.221734 | orchestrator | Wednesday 10 September 2025 01:00:10 +0000 (0:00:03.676) 0:00:07.882 *** 2025-09-10 01:01:26.221762 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-09-10 01:01:26.221770 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-09-10 01:01:26.221776 | orchestrator | 2025-09-10 01:01:26.221782 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-09-10 01:01:26.221789 | orchestrator | Wednesday 10 September 2025 01:00:17 +0000 (0:00:07.258) 0:00:15.140 *** 2025-09-10 01:01:26.221797 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-10 01:01:26.221804 | orchestrator | 2025-09-10 01:01:26.221811 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-09-10 01:01:26.221929 | orchestrator | Wednesday 10 September 2025 01:00:21 +0000 (0:00:03.262) 0:00:18.402 *** 2025-09-10 01:01:26.221944 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-10 01:01:26.221951 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2025-09-10 01:01:26.221959 | orchestrator | 2025-09-10 01:01:26.221967 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-09-10 
01:01:26.221974 | orchestrator | Wednesday 10 September 2025 01:00:25 +0000 (0:00:04.091) 0:00:22.494 *** 2025-09-10 01:01:26.221981 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-10 01:01:26.221989 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2025-09-10 01:01:26.221995 | orchestrator | 2025-09-10 01:01:26.222002 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-09-10 01:01:26.222009 | orchestrator | Wednesday 10 September 2025 01:00:31 +0000 (0:00:06.542) 0:00:29.036 *** 2025-09-10 01:01:26.222059 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2025-09-10 01:01:26.222069 | orchestrator | 2025-09-10 01:01:26.222076 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-10 01:01:26.222085 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-10 01:01:26.222092 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-10 01:01:26.222101 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-10 01:01:26.222109 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-10 01:01:26.222117 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-10 01:01:26.222142 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-10 01:01:26.222150 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-10 01:01:26.222158 | orchestrator | 2025-09-10 01:01:26.222165 | orchestrator | 2025-09-10 01:01:26.222173 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-10 01:01:26.222179 | 
orchestrator | Wednesday 10 September 2025 01:00:36 +0000 (0:00:05.161) 0:00:34.198 *** 2025-09-10 01:01:26.222198 | orchestrator | =============================================================================== 2025-09-10 01:01:26.222206 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 7.26s 2025-09-10 01:01:26.222213 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.54s 2025-09-10 01:01:26.222220 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.16s 2025-09-10 01:01:26.222227 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.09s 2025-09-10 01:01:26.222234 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.68s 2025-09-10 01:01:26.222241 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.26s 2025-09-10 01:01:26.222248 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.56s 2025-09-10 01:01:26.222255 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.16s 2025-09-10 01:01:26.222262 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.10s 2025-09-10 01:01:26.222269 | orchestrator | 2025-09-10 01:01:26.222276 | orchestrator | 2025-09-10 01:01:26.222284 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-09-10 01:01:26.222291 | orchestrator | 2025-09-10 01:01:26.222299 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-09-10 01:01:26.222307 | orchestrator | Wednesday 10 September 2025 00:59:54 +0000 (0:00:00.300) 0:00:00.300 *** 2025-09-10 01:01:26.222315 | orchestrator | changed: [testbed-manager] 2025-09-10 01:01:26.222322 | orchestrator | 2025-09-10 01:01:26.222330 | orchestrator | TASK [Set 
mgr/dashboard/ssl to false] ****************************************** 2025-09-10 01:01:26.222337 | orchestrator | Wednesday 10 September 2025 00:59:56 +0000 (0:00:02.176) 0:00:02.477 *** 2025-09-10 01:01:26.222345 | orchestrator | changed: [testbed-manager] 2025-09-10 01:01:26.222352 | orchestrator | 2025-09-10 01:01:26.222360 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-09-10 01:01:26.222367 | orchestrator | Wednesday 10 September 2025 00:59:57 +0000 (0:00:01.024) 0:00:03.502 *** 2025-09-10 01:01:26.222374 | orchestrator | changed: [testbed-manager] 2025-09-10 01:01:26.222382 | orchestrator | 2025-09-10 01:01:26.222398 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-09-10 01:01:26.222405 | orchestrator | Wednesday 10 September 2025 00:59:58 +0000 (0:00:01.118) 0:00:04.620 *** 2025-09-10 01:01:26.222411 | orchestrator | changed: [testbed-manager] 2025-09-10 01:01:26.222417 | orchestrator | 2025-09-10 01:01:26.222424 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-09-10 01:01:26.222431 | orchestrator | Wednesday 10 September 2025 00:59:59 +0000 (0:00:01.377) 0:00:05.997 *** 2025-09-10 01:01:26.222438 | orchestrator | changed: [testbed-manager] 2025-09-10 01:01:26.222446 | orchestrator | 2025-09-10 01:01:26.222453 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-09-10 01:01:26.222461 | orchestrator | Wednesday 10 September 2025 01:00:01 +0000 (0:00:01.517) 0:00:07.515 *** 2025-09-10 01:01:26.222468 | orchestrator | changed: [testbed-manager] 2025-09-10 01:01:26.222476 | orchestrator | 2025-09-10 01:01:26.222484 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-09-10 01:01:26.222491 | orchestrator | Wednesday 10 September 2025 01:00:02 +0000 (0:00:00.840) 0:00:08.355 *** 2025-09-10 01:01:26.222498 
| orchestrator | changed: [testbed-manager] 2025-09-10 01:01:26.222505 | orchestrator | 2025-09-10 01:01:26.222513 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-09-10 01:01:26.222521 | orchestrator | Wednesday 10 September 2025 01:00:03 +0000 (0:00:01.266) 0:00:09.622 *** 2025-09-10 01:01:26.222528 | orchestrator | changed: [testbed-manager] 2025-09-10 01:01:26.222536 | orchestrator | 2025-09-10 01:01:26.222543 | orchestrator | TASK [Create admin user] ******************************************************* 2025-09-10 01:01:26.222550 | orchestrator | Wednesday 10 September 2025 01:00:04 +0000 (0:00:01.125) 0:00:10.747 *** 2025-09-10 01:01:26.222566 | orchestrator | changed: [testbed-manager] 2025-09-10 01:01:26.222574 | orchestrator | 2025-09-10 01:01:26.222581 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-09-10 01:01:26.222589 | orchestrator | Wednesday 10 September 2025 01:01:00 +0000 (0:00:55.931) 0:01:06.679 *** 2025-09-10 01:01:26.222598 | orchestrator | skipping: [testbed-manager] 2025-09-10 01:01:26.222606 | orchestrator | 2025-09-10 01:01:26.222613 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-10 01:01:26.222621 | orchestrator | 2025-09-10 01:01:26.222629 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-10 01:01:26.222637 | orchestrator | Wednesday 10 September 2025 01:01:00 +0000 (0:00:00.110) 0:01:06.790 *** 2025-09-10 01:01:26.222647 | orchestrator | changed: [testbed-node-0] 2025-09-10 01:01:26.222655 | orchestrator | 2025-09-10 01:01:26.222664 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-10 01:01:26.222671 | orchestrator | 2025-09-10 01:01:26.222680 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-10 
01:01:26.222688 | orchestrator | Wednesday 10 September 2025 01:01:02 +0000 (0:00:01.448) 0:01:08.239 *** 2025-09-10 01:01:26.222696 | orchestrator | changed: [testbed-node-1] 2025-09-10 01:01:26.222704 | orchestrator | 2025-09-10 01:01:26.222712 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-10 01:01:26.222720 | orchestrator | 2025-09-10 01:01:26.222728 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-10 01:01:26.222754 | orchestrator | Wednesday 10 September 2025 01:01:13 +0000 (0:00:11.223) 0:01:19.462 *** 2025-09-10 01:01:26.222763 | orchestrator | changed: [testbed-node-2] 2025-09-10 01:01:26.222770 | orchestrator | 2025-09-10 01:01:26.222786 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-10 01:01:26.222794 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-10 01:01:26.222802 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-10 01:01:26.222810 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-10 01:01:26.222817 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-10 01:01:26.222827 | orchestrator | 2025-09-10 01:01:26.222835 | orchestrator | 2025-09-10 01:01:26.222842 | orchestrator | 2025-09-10 01:01:26.222849 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-10 01:01:26.222857 | orchestrator | Wednesday 10 September 2025 01:01:24 +0000 (0:00:11.147) 0:01:30.610 *** 2025-09-10 01:01:26.222864 | orchestrator | =============================================================================== 2025-09-10 01:01:26.222871 | orchestrator | Create admin user 
------------------------------------------------------ 55.93s 2025-09-10 01:01:26.222879 | orchestrator | Restart ceph manager service ------------------------------------------- 23.82s 2025-09-10 01:01:26.222887 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.18s 2025-09-10 01:01:26.222894 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.52s 2025-09-10 01:01:26.222902 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.38s 2025-09-10 01:01:26.222910 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.27s 2025-09-10 01:01:26.222917 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.13s 2025-09-10 01:01:26.222924 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.12s 2025-09-10 01:01:26.222931 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.02s 2025-09-10 01:01:26.222938 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.84s 2025-09-10 01:01:26.222953 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.11s 2025-09-10 01:01:26.222966 | orchestrator | 2025-09-10 01:01:26 | INFO  | Task f60971a9-de42-4d43-ae23-83fe38836333 is in state SUCCESS 2025-09-10 01:01:26.222974 | orchestrator | 2025-09-10 01:01:26 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:01:26.222981 | orchestrator | 2025-09-10 01:01:26 | INFO  | Task 8bd23522-7f4e-4321-8e6b-a12ff01d28c7 is in state STARTED 2025-09-10 01:01:26.223320 | orchestrator | 2025-09-10 01:01:26 | INFO  | Task 7c694a03-2033-4e59-b064-6c92d4171981 is in state STARTED 2025-09-10 01:01:26.223826 | orchestrator | 2025-09-10 01:01:26 | INFO  | Task 0ae2186c-6168-415a-9a09-6eb0ba8345d6 is in state STARTED 2025-09-10 
01:01:26.223853 | orchestrator | 2025-09-10 01:01:26 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:02:51.433234 | orchestrator | 2025-09-10 01:02:51 | INFO  | Task 
d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:02:51.434699 | orchestrator | 2025-09-10 01:02:51 | INFO  | Task 8bd23522-7f4e-4321-8e6b-a12ff01d28c7 is in state SUCCESS 2025-09-10 01:02:51.439834 | orchestrator | 2025-09-10 01:02:51.439872 | orchestrator | 2025-09-10 01:02:51.439885 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-10 01:02:51.439896 | orchestrator | 2025-09-10 01:02:51.439908 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-10 01:02:51.439919 | orchestrator | Wednesday 10 September 2025 01:00:02 +0000 (0:00:00.219) 0:00:00.219 *** 2025-09-10 01:02:51.439930 | orchestrator | ok: [testbed-node-0] 2025-09-10 01:02:51.439942 | orchestrator | ok: [testbed-node-1] 2025-09-10 01:02:51.439953 | orchestrator | ok: [testbed-node-2] 2025-09-10 01:02:51.439964 | orchestrator | 2025-09-10 01:02:51.439975 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-10 01:02:51.439986 | orchestrator | Wednesday 10 September 2025 01:00:02 +0000 (0:00:00.344) 0:00:00.563 *** 2025-09-10 01:02:51.439997 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-09-10 01:02:51.440008 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-09-10 01:02:51.440028 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-09-10 01:02:51.440097 | orchestrator | 2025-09-10 01:02:51.440120 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-09-10 01:02:51.440142 | orchestrator | 2025-09-10 01:02:51.440155 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-10 01:02:51.440166 | orchestrator | Wednesday 10 September 2025 01:00:03 +0000 (0:00:00.563) 0:00:01.126 *** 2025-09-10 01:02:51.440176 | orchestrator | included: 
/ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 01:02:51.440188 | orchestrator | 2025-09-10 01:02:51.440199 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-09-10 01:02:51.440210 | orchestrator | Wednesday 10 September 2025 01:00:04 +0000 (0:00:00.704) 0:00:01.830 *** 2025-09-10 01:02:51.440221 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-09-10 01:02:51.440231 | orchestrator | 2025-09-10 01:02:51.440242 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-09-10 01:02:51.440253 | orchestrator | Wednesday 10 September 2025 01:00:08 +0000 (0:00:04.093) 0:00:05.924 *** 2025-09-10 01:02:51.440264 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-09-10 01:02:51.440276 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-09-10 01:02:51.440286 | orchestrator | 2025-09-10 01:02:51.440297 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-09-10 01:02:51.440308 | orchestrator | Wednesday 10 September 2025 01:00:14 +0000 (0:00:06.552) 0:00:12.477 *** 2025-09-10 01:02:51.440319 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-09-10 01:02:51.440330 | orchestrator | 2025-09-10 01:02:51.440340 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-09-10 01:02:51.440351 | orchestrator | Wednesday 10 September 2025 01:00:18 +0000 (0:00:03.623) 0:00:16.101 *** 2025-09-10 01:02:51.440363 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-10 01:02:51.440375 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-09-10 01:02:51.440385 | orchestrator | 2025-09-10 01:02:51.440396 | orchestrator | TASK 
[service-ks-register : glance | Creating roles] *************************** 2025-09-10 01:02:51.440407 | orchestrator | Wednesday 10 September 2025 01:00:22 +0000 (0:00:03.907) 0:00:20.008 *** 2025-09-10 01:02:51.440418 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-10 01:02:51.440429 | orchestrator | 2025-09-10 01:02:51.440440 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-09-10 01:02:51.440453 | orchestrator | Wednesday 10 September 2025 01:00:25 +0000 (0:00:03.454) 0:00:23.463 *** 2025-09-10 01:02:51.440486 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-09-10 01:02:51.440519 | orchestrator | 2025-09-10 01:02:51.440532 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-09-10 01:02:51.440545 | orchestrator | Wednesday 10 September 2025 01:00:30 +0000 (0:00:04.180) 0:00:27.644 *** 2025-09-10 01:02:51.440587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-10 01:02:51.440607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-10 01:02:51.440623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-10 01:02:51.440646 | orchestrator | 2025-09-10 01:02:51.440660 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-10 01:02:51.440673 | orchestrator | Wednesday 10 September 2025 01:00:35 +0000 (0:00:05.290) 0:00:32.935 *** 2025-09-10 01:02:51.440690 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 01:02:51.440704 | orchestrator | 2025-09-10 01:02:51.440723 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-09-10 01:02:51.440737 | orchestrator | Wednesday 10 September 2025 01:00:35 +0000 (0:00:00.578) 0:00:33.513 *** 2025-09-10 01:02:51.440750 | orchestrator | changed: [testbed-node-2] 2025-09-10 01:02:51.440763 | orchestrator | changed: [testbed-node-1] 2025-09-10 01:02:51.440777 | orchestrator | changed: [testbed-node-0] 2025-09-10 01:02:51.440789 | orchestrator | 2025-09-10 01:02:51.440802 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-09-10 01:02:51.440813 | orchestrator | Wednesday 10 September 2025 01:00:39 +0000 (0:00:03.316) 0:00:36.830 *** 2025-09-10 01:02:51.440824 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-10 01:02:51.440835 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-10 01:02:51.440845 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-10 01:02:51.440856 | orchestrator | 2025-09-10 01:02:51.440867 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] 
********************************* 2025-09-10 01:02:51.440878 | orchestrator | Wednesday 10 September 2025 01:00:40 +0000 (0:00:01.686) 0:00:38.517 *** 2025-09-10 01:02:51.440888 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-10 01:02:51.440899 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-10 01:02:51.440910 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-10 01:02:51.440921 | orchestrator | 2025-09-10 01:02:51.440932 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-09-10 01:02:51.440943 | orchestrator | Wednesday 10 September 2025 01:00:42 +0000 (0:00:01.304) 0:00:39.821 *** 2025-09-10 01:02:51.440953 | orchestrator | ok: [testbed-node-0] 2025-09-10 01:02:51.440964 | orchestrator | ok: [testbed-node-1] 2025-09-10 01:02:51.440982 | orchestrator | ok: [testbed-node-2] 2025-09-10 01:02:51.440993 | orchestrator | 2025-09-10 01:02:51.441003 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-09-10 01:02:51.441014 | orchestrator | Wednesday 10 September 2025 01:00:43 +0000 (0:00:00.849) 0:00:40.671 *** 2025-09-10 01:02:51.441025 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:02:51.441036 | orchestrator | 2025-09-10 01:02:51.441047 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-09-10 01:02:51.441057 | orchestrator | Wednesday 10 September 2025 01:00:43 +0000 (0:00:00.380) 0:00:41.052 *** 2025-09-10 01:02:51.441068 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:02:51.441079 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:02:51.441090 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:02:51.441100 | orchestrator | 2025-09-10 01:02:51.441111 
| orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-10 01:02:51.441122 | orchestrator | Wednesday 10 September 2025 01:00:43 +0000 (0:00:00.274) 0:00:41.326 *** 2025-09-10 01:02:51.441133 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 01:02:51.441144 | orchestrator | 2025-09-10 01:02:51.441155 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-09-10 01:02:51.441166 | orchestrator | Wednesday 10 September 2025 01:00:44 +0000 (0:00:00.531) 0:00:41.857 *** 2025-09-10 01:02:51.441188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-10 01:02:51.441202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-10 01:02:51.441222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-10 01:02:51.441234 | orchestrator | 2025-09-10 01:02:51.441244 | orchestrator | TASK [service-cert-copy : glance | 
Copying over backend internal TLS certificate] *** 2025-09-10 01:02:51.441255 | orchestrator | Wednesday 10 September 2025 01:00:49 +0000 (0:00:04.933) 0:00:46.791 *** 2025-09-10 01:02:51.441280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-10 01:02:51.441299 | orchestrator | skipping: [testbed-node-1] 2025-09-10 
01:02:51.441311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-10 01:02:51.441323 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:02:51.441347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 
'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-10 01:02:51.441365 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:02:51.441376 | orchestrator | 2025-09-10 01:02:51.441387 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-09-10 01:02:51.441398 | orchestrator | Wednesday 10 September 2025 01:00:53 +0000 (0:00:03.964) 0:00:50.755 *** 2025-09-10 01:02:51.441409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-10 01:02:51.441421 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:02:51.441444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-10 01:02:51.441457 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:02:51.441468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-10 01:02:51.441486 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:02:51.441513 | orchestrator | 2025-09-10 01:02:51.441524 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-09-10 01:02:51.441535 | orchestrator | Wednesday 10 September 2025 01:00:56 +0000 (0:00:03.604) 0:00:54.359 *** 2025-09-10 01:02:51.441546 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:02:51.441557 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:02:51.441567 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:02:51.441578 | orchestrator | 2025-09-10 01:02:51.441589 | orchestrator | TASK 
[glance : Copying over config.json files for services] ******************** 2025-09-10 01:02:51.441600 | orchestrator | Wednesday 10 September 2025 01:01:01 +0000 (0:00:04.541) 0:00:58.901 *** 2025-09-10 01:02:51.441627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-10 01:02:51.441647 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-10 01:02:51.441660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': 
'', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-10 01:02:51.441673 | orchestrator | 
2025-09-10 01:02:51.441684 | orchestrator | TASK [glance : Copying over glance-api.conf] ***********************************
2025-09-10 01:02:51.441695 | orchestrator | Wednesday 10 September 2025 01:01:06 +0000 (0:00:04.920) 0:01:03.821 ***
2025-09-10 01:02:51.441706 | orchestrator | changed: [testbed-node-0]
2025-09-10 01:02:51.441717 | orchestrator | changed: [testbed-node-1]
2025-09-10 01:02:51.441728 | orchestrator | changed: [testbed-node-2]
2025-09-10 01:02:51.441739 | orchestrator | 
2025-09-10 01:02:51.441750 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ******************
2025-09-10 01:02:51.441761 | orchestrator | Wednesday 10 September 2025 01:01:14 +0000 (0:00:08.620) 0:01:12.442 ***
2025-09-10 01:02:51.441779 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:02:51.441790 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:02:51.441801 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:02:51.441812 | orchestrator | 
2025-09-10 01:02:51.441828 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ******************
2025-09-10 01:02:51.441844 | orchestrator | Wednesday 10 September 2025 01:01:20 +0000 (0:00:05.513) 0:01:17.955 ***
2025-09-10 01:02:51.441856 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:02:51.441867 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:02:51.441877 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:02:51.441888 | orchestrator | 
2025-09-10 01:02:51.441899 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2025-09-10 01:02:51.441910 | orchestrator | Wednesday 10 September 2025 01:01:25 +0000 (0:00:04.884) 0:01:22.840 ***
2025-09-10 01:02:51.441921 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:02:51.441932 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:02:51.441943 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:02:51.441953 | orchestrator | 
2025-09-10 01:02:51.441964 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2025-09-10 01:02:51.441975 | orchestrator | Wednesday 10 September 2025 01:01:28 +0000 (0:00:03.685) 0:01:26.525 ***
2025-09-10 01:02:51.441986 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:02:51.441997 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:02:51.442008 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:02:51.442063 | orchestrator | 
2025-09-10 01:02:51.442075 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2025-09-10 01:02:51.442086 | orchestrator | Wednesday 10 September 2025 01:01:32 +0000 (0:00:03.651) 0:01:30.176 ***
2025-09-10 01:02:51.442097 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:02:51.442108 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:02:51.442119 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:02:51.442130 | orchestrator | 
2025-09-10 01:02:51.442140 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2025-09-10 01:02:51.442151 | orchestrator | Wednesday 10 September 2025 01:01:32 +0000 (0:00:00.289) 0:01:30.466 ***
2025-09-10 01:02:51.442162 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2) 
2025-09-10 01:02:51.442173 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:02:51.442184 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2) 
2025-09-10 01:02:51.442195 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:02:51.442205 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2) 
2025-09-10 01:02:51.442216 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:02:51.442227 | orchestrator | 
2025-09-10 01:02:51.442238 | orchestrator | TASK [glance : Check glance containers] ****************************************
2025-09-10 01:02:51.442249 | orchestrator | Wednesday 10 September 2025 01:01:35 +0000 (0:00:03.027) 0:01:33.493 ***
2025-09-10 01:02:51.442261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-10 01:02:51.442295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-10 01:02:51.442309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': 
{'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-10 01:02:51.442338 | orchestrator | 2025-09-10 01:02:51.442349 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-10 01:02:51.442360 | orchestrator | Wednesday 10 September 2025 01:01:40 +0000 (0:00:04.301) 0:01:37.794 *** 2025-09-10 01:02:51.442370 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:02:51.442381 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:02:51.442392 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:02:51.442402 | orchestrator | 2025-09-10 01:02:51.442413 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-09-10 01:02:51.442424 | orchestrator | Wednesday 10 September 2025 01:01:40 +0000 (0:00:00.364) 0:01:38.159 *** 2025-09-10 01:02:51.442434 | orchestrator | changed: [testbed-node-0] 2025-09-10 01:02:51.442445 | orchestrator | 2025-09-10 01:02:51.442455 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-09-10 01:02:51.442466 | orchestrator | Wednesday 10 
September 2025 01:01:42 +0000 (0:00:02.063) 0:01:40.223 *** 2025-09-10 01:02:51.442477 | orchestrator | changed: [testbed-node-0] 2025-09-10 01:02:51.442488 | orchestrator | 2025-09-10 01:02:51.442556 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-09-10 01:02:51.442569 | orchestrator | Wednesday 10 September 2025 01:01:44 +0000 (0:00:02.108) 0:01:42.332 *** 2025-09-10 01:02:51.442580 | orchestrator | changed: [testbed-node-0] 2025-09-10 01:02:51.442590 | orchestrator | 2025-09-10 01:02:51.442605 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-09-10 01:02:51.442624 | orchestrator | Wednesday 10 September 2025 01:01:46 +0000 (0:00:02.046) 0:01:44.379 *** 2025-09-10 01:02:51.442701 | orchestrator | changed: [testbed-node-0] 2025-09-10 01:02:51.442725 | orchestrator | 2025-09-10 01:02:51.442746 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-09-10 01:02:51.442810 | orchestrator | Wednesday 10 September 2025 01:02:13 +0000 (0:00:26.598) 0:02:10.977 *** 2025-09-10 01:02:51.442839 | orchestrator | changed: [testbed-node-0] 2025-09-10 01:02:51.442852 | orchestrator | 2025-09-10 01:02:51.442872 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-09-10 01:02:51.442884 | orchestrator | Wednesday 10 September 2025 01:02:15 +0000 (0:00:02.179) 0:02:13.157 *** 2025-09-10 01:02:51.442894 | orchestrator | 2025-09-10 01:02:51.442905 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-09-10 01:02:51.442916 | orchestrator | Wednesday 10 September 2025 01:02:15 +0000 (0:00:00.065) 0:02:13.222 *** 2025-09-10 01:02:51.442926 | orchestrator | 2025-09-10 01:02:51.442937 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-09-10 01:02:51.442948 | orchestrator | Wednesday 10 
September 2025 01:02:15 +0000 (0:00:00.074) 0:02:13.297 ***
2025-09-10 01:02:51.442958 | orchestrator |
2025-09-10 01:02:51.442968 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2025-09-10 01:02:51.442978 | orchestrator | Wednesday 10 September 2025 01:02:15 +0000 (0:00:00.075) 0:02:13.372 ***
2025-09-10 01:02:51.442987 | orchestrator | changed: [testbed-node-0]
2025-09-10 01:02:51.442997 | orchestrator | changed: [testbed-node-1]
2025-09-10 01:02:51.443006 | orchestrator | changed: [testbed-node-2]
2025-09-10 01:02:51.443016 | orchestrator |
2025-09-10 01:02:51.443025 | orchestrator | PLAY RECAP *********************************************************************
2025-09-10 01:02:51.443036 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-09-10 01:02:51.443046 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-10 01:02:51.443056 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-10 01:02:51.443074 | orchestrator |
2025-09-10 01:02:51.443084 | orchestrator |
2025-09-10 01:02:51.443094 | orchestrator | TASKS RECAP ********************************************************************
2025-09-10 01:02:51.443103 | orchestrator | Wednesday 10 September 2025 01:02:50 +0000 (0:00:34.253) 0:02:47.625 ***
2025-09-10 01:02:51.443113 | orchestrator | ===============================================================================
2025-09-10 01:02:51.443122 | orchestrator | glance : Restart glance-api container ---------------------------------- 34.25s
2025-09-10 01:02:51.443132 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 26.60s
2025-09-10 01:02:51.443141 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 8.62s
2025-09-10 01:02:51.443150 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.55s
2025-09-10 01:02:51.443160 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 5.51s
2025-09-10 01:02:51.443169 | orchestrator | glance : Ensuring config directories exist ------------------------------ 5.29s
2025-09-10 01:02:51.443178 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.93s
2025-09-10 01:02:51.443188 | orchestrator | glance : Copying over config.json files for services -------------------- 4.92s
2025-09-10 01:02:51.443197 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 4.88s
2025-09-10 01:02:51.443206 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 4.54s
2025-09-10 01:02:51.443216 | orchestrator | glance : Check glance containers ---------------------------------------- 4.30s
2025-09-10 01:02:51.443225 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.18s
2025-09-10 01:02:51.443235 | orchestrator | service-ks-register : glance | Creating services ------------------------ 4.09s
2025-09-10 01:02:51.443244 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 3.96s
2025-09-10 01:02:51.443254 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.91s
2025-09-10 01:02:51.443263 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.69s
2025-09-10 01:02:51.443272 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.65s
2025-09-10 01:02:51.443282 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.62s
2025-09-10 01:02:51.443291 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.60s
2025-09-10 01:02:51.443300 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.45s
2025-09-10 01:02:51.443310 | orchestrator | 2025-09-10 01:02:51 | INFO  | Task 7c694a03-2033-4e59-b064-6c92d4171981 is in state STARTED
2025-09-10 01:02:51.443319 | orchestrator | 2025-09-10 01:02:51 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED
2025-09-10 01:02:51.443421 | orchestrator | 2025-09-10 01:02:51 | INFO  | Task 0ae2186c-6168-415a-9a09-6eb0ba8345d6 is in state STARTED
2025-09-10 01:02:51.443434 | orchestrator | 2025-09-10 01:02:51 | INFO  | Wait 1 second(s) until the next check
2025-09-10 01:02:54.482977 | orchestrator | 2025-09-10 01:02:54 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED
2025-09-10 01:02:54.484802 | orchestrator | 2025-09-10 01:02:54 | INFO  | Task 7c694a03-2033-4e59-b064-6c92d4171981 is in state STARTED
2025-09-10 01:02:54.487196 | orchestrator | 2025-09-10 01:02:54 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED
2025-09-10 01:02:54.488971 | orchestrator | 2025-09-10 01:02:54 | INFO  | Task 0ae2186c-6168-415a-9a09-6eb0ba8345d6 is in state STARTED
2025-09-10 01:02:54.489365 | orchestrator | 2025-09-10 01:02:54 | INFO  | Wait 1 second(s) until the next check
2025-09-10 01:02:57.582276 | orchestrator | 2025-09-10 01:02:57 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED
2025-09-10 01:02:57.584021 | orchestrator | 2025-09-10 01:02:57 | INFO  | Task 7c694a03-2033-4e59-b064-6c92d4171981 is in state STARTED
2025-09-10 01:02:57.584932 | orchestrator | 2025-09-10 01:02:57 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED
2025-09-10 01:02:57.586520 | orchestrator | 2025-09-10 01:02:57 | INFO  | Task 0ae2186c-6168-415a-9a09-6eb0ba8345d6 is in state STARTED
2025-09-10 01:02:57.586600 | orchestrator | 2025-09-10 01:02:57 | INFO  | Wait 1 second(s) until the next check
2025-09-10 01:03:00.625364 | orchestrator | 2025-09-10 01:03:00 | INFO  | Task
d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED
2025-09-10 01:03:00.625454 | orchestrator | 2025-09-10 01:03:00 | INFO  | Task 7c694a03-2033-4e59-b064-6c92d4171981 is in state STARTED
2025-09-10 01:03:00.626599 | orchestrator | 2025-09-10 01:03:00 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED
2025-09-10 01:03:00.627706 | orchestrator | 2025-09-10 01:03:00 | INFO  | Task 0ae2186c-6168-415a-9a09-6eb0ba8345d6 is in state STARTED
2025-09-10 01:03:00.628905 | orchestrator | 2025-09-10 01:03:00 | INFO  | Wait 1 second(s) until the next check
2025-09-10 01:03:03.672140 | orchestrator | 2025-09-10 01:03:03 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED
2025-09-10 01:03:03.673751 | orchestrator | 2025-09-10 01:03:03 | INFO  | Task 7c694a03-2033-4e59-b064-6c92d4171981 is in state STARTED
2025-09-10 01:03:03.675615 | orchestrator | 2025-09-10 01:03:03 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED
2025-09-10 01:03:03.676947 | orchestrator | 2025-09-10 01:03:03 | INFO  | Task 0ae2186c-6168-415a-9a09-6eb0ba8345d6 is in state STARTED
2025-09-10 01:03:03.677171 | orchestrator | 2025-09-10 01:03:03 | INFO  | Wait 1 second(s) until the next check
2025-09-10 01:03:06.730777 | orchestrator | 2025-09-10 01:03:06 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED
2025-09-10 01:03:06.732143 | orchestrator | 2025-09-10 01:03:06 | INFO  | Task 7c694a03-2033-4e59-b064-6c92d4171981 is in state STARTED
2025-09-10 01:03:06.732952 | orchestrator | 2025-09-10 01:03:06 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED
2025-09-10 01:03:06.734870 | orchestrator | 2025-09-10 01:03:06 | INFO  | Task 0ae2186c-6168-415a-9a09-6eb0ba8345d6 is in state STARTED
2025-09-10 01:03:06.734898 | orchestrator | 2025-09-10 01:03:06 | INFO  | Wait 1 second(s) until the next check
2025-09-10 01:03:09.782999 | orchestrator | 2025-09-10 01:03:09 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED
2025-09-10 01:03:09.785067 | orchestrator | 2025-09-10 01:03:09 | INFO  | Task 7c694a03-2033-4e59-b064-6c92d4171981 is in state STARTED
2025-09-10 01:03:09.787028 | orchestrator | 2025-09-10 01:03:09 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED
2025-09-10 01:03:09.789564 | orchestrator | 2025-09-10 01:03:09 | INFO  | Task 0ae2186c-6168-415a-9a09-6eb0ba8345d6 is in state STARTED
2025-09-10 01:03:09.789601 | orchestrator | 2025-09-10 01:03:09 | INFO  | Wait 1 second(s) until the next check
2025-09-10 01:03:12.838933 | orchestrator | 2025-09-10 01:03:12 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED
2025-09-10 01:03:12.841765 | orchestrator | 2025-09-10 01:03:12 | INFO  | Task 7c694a03-2033-4e59-b064-6c92d4171981 is in state STARTED
2025-09-10 01:03:12.842347 | orchestrator | 2025-09-10 01:03:12 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED
2025-09-10 01:03:12.843852 | orchestrator | 2025-09-10 01:03:12 | INFO  | Task 0ae2186c-6168-415a-9a09-6eb0ba8345d6 is in state STARTED
2025-09-10 01:03:12.843875 | orchestrator | 2025-09-10 01:03:12 | INFO  | Wait 1 second(s) until the next check
2025-09-10 01:03:15.900681 | orchestrator | 2025-09-10 01:03:15 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED
2025-09-10 01:03:15.902385 | orchestrator | 2025-09-10 01:03:15 | INFO  | Task 7c694a03-2033-4e59-b064-6c92d4171981 is in state STARTED
2025-09-10 01:03:15.904077 | orchestrator | 2025-09-10 01:03:15 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED
2025-09-10 01:03:15.905247 | orchestrator | 2025-09-10 01:03:15 | INFO  | Task 0ae2186c-6168-415a-9a09-6eb0ba8345d6 is in state STARTED
2025-09-10 01:03:15.905287 | orchestrator | 2025-09-10 01:03:15 | INFO  | Wait 1 second(s) until the next check
2025-09-10 01:03:18.956573 | orchestrator | 2025-09-10 01:03:18 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED
2025-09-10 01:03:18.957836 | orchestrator | 2025-09-10 01:03:18 | INFO  | Task 7c694a03-2033-4e59-b064-6c92d4171981 is in state STARTED
2025-09-10 01:03:18.959492 | orchestrator | 2025-09-10 01:03:18 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED
2025-09-10 01:03:18.963853 | orchestrator | 2025-09-10 01:03:18 | INFO  | Task 0ae2186c-6168-415a-9a09-6eb0ba8345d6 is in state SUCCESS
2025-09-10 01:03:18.966732 | orchestrator |
2025-09-10 01:03:18.966863 | orchestrator |
2025-09-10 01:03:18.966880 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-10 01:03:18.966892 | orchestrator |
2025-09-10 01:03:18.966903 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-10 01:03:18.966914 | orchestrator | Wednesday 10 September 2025 00:59:53 +0000 (0:00:00.289) 0:00:00.289 ***
2025-09-10 01:03:18.966951 | orchestrator | ok: [testbed-manager]
2025-09-10 01:03:18.966965 | orchestrator | ok: [testbed-node-0]
2025-09-10 01:03:18.966976 | orchestrator | ok: [testbed-node-1]
2025-09-10 01:03:18.966987 | orchestrator | ok: [testbed-node-2]
2025-09-10 01:03:18.966999 | orchestrator | ok: [testbed-node-3]
2025-09-10 01:03:18.967010 | orchestrator | ok: [testbed-node-4]
2025-09-10 01:03:18.967021 | orchestrator | ok: [testbed-node-5]
2025-09-10 01:03:18.967031 | orchestrator |
2025-09-10 01:03:18.967042 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-10 01:03:18.967053 | orchestrator | Wednesday 10 September 2025 00:59:54 +0000 (0:00:00.957) 0:00:01.246 ***
2025-09-10 01:03:18.967076 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2025-09-10 01:03:18.967088 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2025-09-10 01:03:18.967135 | orchestrator | ok: [testbed-node-1] =>
(item=enable_prometheus_True) 2025-09-10 01:03:18.967147 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-09-10 01:03:18.967158 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-09-10 01:03:18.967169 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-09-10 01:03:18.967205 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-09-10 01:03:18.967216 | orchestrator | 2025-09-10 01:03:18.967227 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-09-10 01:03:18.967238 | orchestrator | 2025-09-10 01:03:18.967248 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-09-10 01:03:18.967454 | orchestrator | Wednesday 10 September 2025 00:59:55 +0000 (0:00:00.991) 0:00:02.238 *** 2025-09-10 01:03:18.967474 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-10 01:03:18.967623 | orchestrator | 2025-09-10 01:03:18.967637 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-09-10 01:03:18.967651 | orchestrator | Wednesday 10 September 2025 00:59:57 +0000 (0:00:02.086) 0:00:04.325 *** 2025-09-10 01:03:18.967667 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-10 01:03:18.967704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-10 01:03:18.967717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-10 01:03:18.967741 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-10 01:03:18.967768 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-10 01:03:18.967780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 01:03:18.967792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 01:03:18.967803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-10 01:03:18.967822 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-10 01:03:18.967834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 01:03:18.967846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 01:03:18.967862 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-10 01:03:18.967880 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-10 01:03:18.967892 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-10 01:03:18.967903 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-10 01:03:18.968025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 01:03:18.968037 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-10 01:03:18.968049 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-10 01:03:18.968070 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-10 01:03:18.968091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-10 01:03:18.968103 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-10 01:03:18.968114 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 01:03:18.968133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 01:03:18.968144 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-10 01:03:18.968155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 01:03:18.968166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 01:03:18.968181 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-10 01:03:18.968199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-10 01:03:18.968210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 01:03:18.968227 | orchestrator | 2025-09-10 01:03:18.968239 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-09-10 01:03:18.968250 | orchestrator | Wednesday 10 September 2025 01:00:02 +0000 (0:00:04.336) 0:00:08.662 *** 2025-09-10 01:03:18.968261 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-10 01:03:18.968272 | orchestrator | 2025-09-10 01:03:18.968283 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-09-10 01:03:18.968294 | orchestrator | Wednesday 10 September 2025 01:00:04 +0000 (0:00:01.882) 0:00:10.545 *** 2025-09-10 01:03:18.968306 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-10 01:03:18.968317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 
'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-10 01:03:18.968328 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-10 01:03:18.968344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-10 01:03:18.968362 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-10 01:03:18.968374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-10 01:03:18.968391 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-10 01:03:18.968402 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-10 01:03:18.968414 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-10 01:03:18.968433 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-10 01:03:18.968453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 01:03:18.968487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 01:03:18.968541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 01:03:18.968562 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-10 01:03:18.968595 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-10 01:03:18.968615 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 
'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-10 01:03:18.968631 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-10 01:03:18.968643 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-10 01:03:18.968660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 01:03:18.968678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 01:03:18.968690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 01:03:18.968708 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-10 01:03:18.968720 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 01:03:18.968731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-10 01:03:18.968742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-10 01:03:18.968753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-10 
01:03:18.968769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 01:03:18.969752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 01:03:18.969807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 01:03:18.969821 | orchestrator | 2025-09-10 01:03:18.969832 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-09-10 01:03:18.969844 | orchestrator | Wednesday 10 September 2025 01:00:10 +0000 (0:00:06.213) 0:00:16.758 *** 2025-09-10 01:03:18.969856 | orchestrator | skipping: [testbed-manager] => 
(item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-10 01:03:18.969868 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-10 01:03:18.969879 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-10 01:03:18.969898 | orchestrator | skipping: [testbed-manager] => (item={'key': 
'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-10 01:03:18.969920 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-10 01:03:18.969939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  
2025-09-10 01:03:18.969951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-10 01:03:18.969962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-10 01:03:18.969974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-10 01:03:18.969985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-10 01:03:18.969996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-10 01:03:18.970011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-10 01:03:18.970091 | orchestrator | skipping: [testbed-manager] 2025-09-10 01:03:18.970110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-10 01:03:18.970122 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:03:18.970133 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-10 01:03:18.970144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-10 01:03:18.970155 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:03:18.970167 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-10 01:03:18.970178 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-10 01:03:18.970189 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-10 01:03:18.970200 | orchestrator | skipping: [testbed-node-3] 2025-09-10 01:03:18.970211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-10 01:03:18.970234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-10 01:03:18.970253 | orchestrator 
| skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-10 01:03:18.970265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-10 01:03:18.970276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-10 01:03:18.970287 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:03:18.970298 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 
'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-10 01:03:18.970309 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-10 01:03:18.970320 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-10 01:03:18.970333 | orchestrator | skipping: [testbed-node-4] 2025-09-10 01:03:18.970352 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-10 
01:03:18.970371 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-10 01:03:18.970392 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-10 01:03:18.970406 | orchestrator | skipping: [testbed-node-5] 2025-09-10 01:03:18.970419 | orchestrator | 2025-09-10 01:03:18.970431 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-09-10 01:03:18.970443 | orchestrator | Wednesday 10 September 2025 01:00:11 +0000 (0:00:01.572) 0:00:18.330 *** 2025-09-10 01:03:18.970456 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-10 01:03:18.970469 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-10 01:03:18.970482 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-10 01:03:18.970495 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-10 01:03:18.970553 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-10 01:03:18.970575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-10 01:03:18.970589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-10 01:03:18.970602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-10 01:03:18.970614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-10 01:03:18.970627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-10 01:03:18.970640 | orchestrator | skipping: [testbed-manager] 2025-09-10 01:03:18.970659 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:03:18.970672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-10 01:03:18.970685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-10 01:03:18.970701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-10 01:03:18.970719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-10 01:03:18.970739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-10 01:03:18.970759 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:03:18.970778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-10 01:03:18.970798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-10 01:03:18.970825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-10 01:03:18.970870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-10 01:03:18.970903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-10 01:03:18.970922 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:03:18.970950 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-10 01:03:18.970963 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-10 01:03:18.970974 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-10 01:03:18.970985 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-10 01:03:18.970996 | orchestrator | skipping: [testbed-node-3] 2025-09-10 01:03:18.971007 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-10 01:03:18.971026 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-10 01:03:18.971037 | orchestrator | skipping: [testbed-node-4] 2025-09-10 01:03:18.971048 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-10 01:03:18.971064 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-10 01:03:18.971082 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-10 01:03:18.971094 | orchestrator | skipping: [testbed-node-5] 2025-09-10 01:03:18.971104 | orchestrator | 2025-09-10 01:03:18.971116 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-09-10 01:03:18.971127 | orchestrator | Wednesday 10 September 2025 01:00:13 +0000 (0:00:01.884) 0:00:20.215 *** 2025-09-10 01:03:18.971138 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-10 01:03:18.971149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-10 01:03:18.971168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-10 01:03:18.971179 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-10 01:03:18.971190 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-10 01:03:18.971206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-10 01:03:18.971222 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-10 01:03:18.971233 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-10 01:03:18.971244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 01:03:18.971255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 01:03:18.971273 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-10 01:03:18.971284 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-10 01:03:18.971295 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 01:03:18.971311 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-10 01:03:18.971328 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-10 01:03:18.971339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 01:03:18.971350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 01:03:18.971367 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-10 01:03:18.971379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 01:03:18.971389 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-10 01:03:18.971401 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-10 01:03:18.971422 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-10 01:03:18.971435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-10 01:03:18.971446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-10 01:03:18.971464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-10 01:03:18.971475 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 01:03:18.971486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 01:03:18.971522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 01:03:18.971541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 01:03:18.971552 | orchestrator | 2025-09-10 01:03:18.971563 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] 
******************* 2025-09-10 01:03:18.971574 | orchestrator | Wednesday 10 September 2025 01:00:19 +0000 (0:00:06.161) 0:00:26.376 *** 2025-09-10 01:03:18.971585 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-10 01:03:18.971596 | orchestrator | 2025-09-10 01:03:18.971607 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-09-10 01:03:18.971623 | orchestrator | Wednesday 10 September 2025 01:00:20 +0000 (0:00:01.056) 0:00:27.433 *** 2025-09-10 01:03:18.971635 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094357, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.559903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.971657 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094357, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.559903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.971668 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094357, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.559903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.971679 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094357, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.559903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.971691 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094503, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5860012, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.971707 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094357, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 
'ctime': 1757463416.559903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-10 01:03:18.971723 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094357, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.559903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.971735 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094503, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5860012, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.971752 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094357, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.559903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.971763 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094503, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5860012, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.971774 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094503, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5860012, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.971785 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094348, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5575118, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.971800 | orchestrator | skipping: 
[testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094503, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5860012, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.971817 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094348, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5575118, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.971828 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094503, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5860012, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.971848 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094348, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5575118, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.971868 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094348, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5575118, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.971887 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094490, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5820518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.971907 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094348, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 
1757462531.0, 'ctime': 1757463416.5575118, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.971926 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094490, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5820518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.971974 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094490, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5820518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972016 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094348, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5575118, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972042 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094503, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5860012, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-10 01:03:18.972060 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094490, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5820518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972080 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094339, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.556266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972099 | orchestrator | skipping: 
[testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094339, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.556266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972118 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094339, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.556266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972145 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094490, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5820518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972267 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094358, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5605319, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972283 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094358, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5605319, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972294 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094490, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5820518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972305 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094488, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 
1757462531.0, 'ctime': 1757463416.5820518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972316 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094339, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.556266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972327 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094358, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5605319, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972344 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094358, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5605319, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972369 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094488, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5820518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972381 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094339, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.556266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972392 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094488, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5820518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972403 | orchestrator | skipping: [testbed-node-0] => 
(item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094362, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5610993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972414 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094348, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5575118, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-10 01:03:18.972425 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094339, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.556266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972441 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094488, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5820518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972464 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094362, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5610993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972476 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094362, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5610993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972487 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094358, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 
1757463416.5605319, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972522 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094354, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.559903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972537 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094358, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5605319, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972548 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094362, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5610993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972571 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094354, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.559903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972588 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094488, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5820518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972600 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094488, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5820518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972611 | orchestrator | skipping: [testbed-node-4] => (item={'path': 
'/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094354, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.559903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972622 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094354, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.559903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972633 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094500, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5851533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972644 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094362, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5610993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972669 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094490, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5820518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-10 01:03:18.972686 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094500, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5851533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972698 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094500, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 
'ctime': 1757463416.5851533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972709 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094333, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5548153, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972720 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094362, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5610993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972731 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094333, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5548153, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972742 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094514, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5880196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972764 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094354, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.559903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972781 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094500, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5851533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972793 | orchestrator | skipping: [testbed-node-2] 
=> (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094514, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5880196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972804 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094500, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5851533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972815 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094494, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5838313, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972826 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094333, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5548153, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972843 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094333, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5548153, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972859 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094354, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.559903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972876 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094494, 'dev': 122, 'nlink': 1, 'atime': 
1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5838313, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972887 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094514, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5880196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972898 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094342, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5568628, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972909 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094494, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5838313, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972921 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094333, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5548153, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972938 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094339, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.556266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-10 01:03:18.972953 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094342, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5568628, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972969 | orchestrator | 
skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094342, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5568628, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972981 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094334, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5552676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.972992 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094500, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5851533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.973003 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094514, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5880196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.973014 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094514, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5880196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.973030 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094334, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5552676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.973046 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094368, 'dev': 122, 'nlink': 1, 'atime': 
1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.580934, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.973063 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094334, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5552676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.973075 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094494, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5838313, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.973086 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094333, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5548153, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.973097 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094494, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5838313, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.973108 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094358, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5605319, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-10 01:03:18.973126 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094342, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5568628, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.973142 | 
orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094364, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.562172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.973159 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094342, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5568628, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.973174 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094368, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.580934, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.973193 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094368, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.580934, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.973212 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094514, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5880196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.973241 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094512, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.587255, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.973259 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:03:18.973278 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
3792, 'inode': 1094364, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.562172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.973303 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094364, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.562172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.973334 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094334, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5552676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.973355 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094488, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5820518, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-10 01:03:18.973374 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094334, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5552676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.973394 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094494, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5838313, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.973425 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094512, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.587255, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False})  2025-09-10 01:03:18.973441 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:03:18.973452 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094368, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.580934, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.973472 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094368, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.580934, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.973490 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094512, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.587255, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-10 01:03:18.973554 | orchestrator | skipping: [testbed-node-5] 
2025-09-10 01:03:18.973567 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094342, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5568628, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-10 01:03:18.973579 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094364, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.562172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-10 01:03:18.973598 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094364, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.562172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-10 01:03:18.973609 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094334, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5552676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-10 01:03:18.973620 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094512, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.587255, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-10 01:03:18.973631 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:03:18.973647 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094512, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.587255, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-10 01:03:18.973658 | orchestrator | skipping: [testbed-node-4]
2025-09-10 01:03:18.973676 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094362, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5610993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-10 01:03:18.973688 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094368, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.580934, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-10 01:03:18.973699 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094364, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.562172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-10 01:03:18.973716 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094512, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.587255, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-10 01:03:18.973727 | orchestrator | skipping: [testbed-node-3]
2025-09-10 01:03:18.973738 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094354, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.559903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-10 01:03:18.973749 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094500, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5851533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-10 01:03:18.973763 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094333, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5548153, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-10 01:03:18.973778 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094514, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5880196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-10 01:03:18.973789 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094494, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5838313, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-10 01:03:18.973799 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094342, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5568628, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-10 01:03:18.973814 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094334, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5552676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-10 01:03:18.973824 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094368, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.580934, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-10 01:03:18.973834 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094364, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.562172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-10 01:03:18.973848 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094512, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.587255, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-10 01:03:18.973858 | orchestrator |
2025-09-10 01:03:18.973868 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2025-09-10 01:03:18.973878 | orchestrator | Wednesday 10 September 2025 01:00:47 +0000 (0:00:26.371) 0:00:53.804 ***
2025-09-10 01:03:18.973887 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-10 01:03:18.973897 | orchestrator |
2025-09-10 01:03:18.973911 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2025-09-10 01:03:18.973921 | orchestrator | Wednesday 10 September 2025 01:00:47 +0000 (0:00:00.598) 0:00:54.403 ***
2025-09-10 01:03:18.973931 | orchestrator | [WARNING]: Skipped
2025-09-10 01:03:18.973941 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-10 01:03:18.973950 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2025-09-10 01:03:18.973960 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-10 01:03:18.973969 | orchestrator | node-1/prometheus.yml.d' is not a directory
2025-09-10 01:03:18.973979 | orchestrator | [WARNING]: Skipped
2025-09-10 01:03:18.973989 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-10 01:03:18.974003 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2025-09-10 01:03:18.974047 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-10 01:03:18.974060 | orchestrator | node-2/prometheus.yml.d' is not a directory
2025-09-10 01:03:18.974069 | orchestrator | [WARNING]: Skipped
2025-09-10 01:03:18.974079 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-10 01:03:18.974092 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2025-09-10 01:03:18.974113 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-10 01:03:18.974136 | orchestrator | manager/prometheus.yml.d' is not a directory
2025-09-10 01:03:18.974151 | orchestrator | [WARNING]: Skipped
2025-09-10 01:03:18.974167 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-10 01:03:18.974183 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2025-09-10 01:03:18.974199 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-10 01:03:18.974215 | orchestrator | node-0/prometheus.yml.d' is not a directory
2025-09-10 01:03:18.974231 | orchestrator | [WARNING]: Skipped
2025-09-10 01:03:18.974247 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-10 01:03:18.974263 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2025-09-10 01:03:18.974274 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-10 01:03:18.974283 | orchestrator | node-3/prometheus.yml.d' is not a directory
2025-09-10 01:03:18.974292 | orchestrator | [WARNING]: Skipped
2025-09-10 01:03:18.974302 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-10 01:03:18.974311 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2025-09-10 01:03:18.974320 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-10 01:03:18.974330 | orchestrator | node-5/prometheus.yml.d' is not a directory
2025-09-10 01:03:18.974342 | orchestrator | [WARNING]: Skipped
2025-09-10 01:03:18.974363 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-10 01:03:18.974386 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2025-09-10 01:03:18.974402 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-10 01:03:18.974417 | orchestrator | node-4/prometheus.yml.d' is not a directory
2025-09-10 01:03:18.974440 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-09-10 01:03:18.974461 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-09-10 01:03:18.974486 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-10 01:03:18.974523 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-10 01:03:18.974539 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-10 01:03:18.974554 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-10 01:03:18.974569 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-10 01:03:18.974593 | orchestrator |
2025-09-10 01:03:18.974613 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2025-09-10 01:03:18.974630 | orchestrator | Wednesday 10 September 2025 01:00:49 +0000 (0:00:01.923) 0:00:56.326 ***
2025-09-10 01:03:18.974646 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-10 01:03:18.974660 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-10 01:03:18.974670 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:03:18.974680 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:03:18.974689 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-10 01:03:18.974699 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:03:18.974708 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-10 01:03:18.974766 | orchestrator | skipping: [testbed-node-3]
2025-09-10 01:03:18.974776 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-10 01:03:18.974793 | orchestrator | skipping: [testbed-node-4]
2025-09-10 01:03:18.974802 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-10 01:03:18.974812 | orchestrator | skipping: [testbed-node-5]
2025-09-10 01:03:18.974821 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-10 01:03:18.974830 | orchestrator |
2025-09-10 01:03:18.974840 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2025-09-10 01:03:18.974849 | orchestrator | Wednesday 10 September 2025 01:01:07 +0000 (0:00:17.963) 0:01:14.291 ***
2025-09-10 01:03:18.974859 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-10 01:03:18.974876 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:03:18.974886 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-10 01:03:18.974896 | orchestrator | skipping: [testbed-node-3]
2025-09-10 01:03:18.974905 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-10 01:03:18.974915 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:03:18.974924 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-10 01:03:18.974934 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:03:18.974943 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-10 01:03:18.974953 | orchestrator | skipping: [testbed-node-4]
2025-09-10 01:03:18.974962 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-10 01:03:18.974972 | orchestrator | skipping: [testbed-node-5]
2025-09-10 01:03:18.974981 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-10 01:03:18.974991 | orchestrator |
2025-09-10 01:03:18.975000 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2025-09-10 01:03:18.975009 | orchestrator | Wednesday 10 September 2025 01:01:13 +0000 (0:00:06.002) 0:01:20.294 ***
2025-09-10 01:03:18.975020 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-10 01:03:18.975029 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-10 01:03:18.975039 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:03:18.975049 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:03:18.975058 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-10 01:03:18.975068 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:03:18.975078 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-10 01:03:18.975087 | orchestrator | skipping: [testbed-node-4]
2025-09-10 01:03:18.975096 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-10 01:03:18.975106 | orchestrator | skipping: [testbed-node-3]
2025-09-10 01:03:18.975115 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-10 01:03:18.975125 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-10 01:03:18.975141 | orchestrator | skipping: [testbed-node-5]
2025-09-10 01:03:18.975151 | orchestrator |
2025-09-10 01:03:18.975160 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2025-09-10 01:03:18.975170 | orchestrator | Wednesday 10 September 2025 01:01:15 +0000 (0:00:02.170) 0:01:22.464 ***
2025-09-10 01:03:18.975179 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-10 01:03:18.975188 | orchestrator |
2025-09-10 01:03:18.975198 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2025-09-10 01:03:18.975207 | orchestrator | Wednesday 10 September 2025 01:01:17 +0000 (0:00:01.307) 0:01:23.771 ***
2025-09-10 01:03:18.975216 | orchestrator | skipping: [testbed-manager]
2025-09-10 01:03:18.975226 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:03:18.975235 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:03:18.975245 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:03:18.975254 | orchestrator | skipping: [testbed-node-3]
2025-09-10 01:03:18.975263 | orchestrator | skipping: [testbed-node-4]
2025-09-10 01:03:18.975272 | orchestrator | skipping: [testbed-node-5]
2025-09-10 01:03:18.975282 | orchestrator |
2025-09-10 01:03:18.975291 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2025-09-10 01:03:18.975300 | orchestrator | Wednesday 10 September 2025 01:01:18 +0000 (0:00:01.039) 0:01:24.811 ***
2025-09-10 01:03:18.975310 | orchestrator | skipping: [testbed-manager]
2025-09-10 01:03:18.975319 | orchestrator | skipping: [testbed-node-3]
2025-09-10 01:03:18.975329 | orchestrator | skipping: [testbed-node-4]
2025-09-10 01:03:18.975338 | orchestrator | skipping: [testbed-node-5]
2025-09-10 01:03:18.975347 | orchestrator | changed: [testbed-node-0]
2025-09-10 01:03:18.975356 | orchestrator | changed: [testbed-node-1]
2025-09-10 01:03:18.975366 | orchestrator | changed: [testbed-node-2]
2025-09-10 01:03:18.975375 | orchestrator |
2025-09-10 01:03:18.975385 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2025-09-10 01:03:18.975394 | orchestrator | Wednesday 10 September 2025 01:01:20 +0000 (0:00:02.479) 0:01:27.290 ***
2025-09-10 01:03:18.975408 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-10 01:03:18.975418 | orchestrator | skipping: [testbed-manager]
2025-09-10 01:03:18.975427 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-10 01:03:18.975437 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:03:18.975446 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-10 01:03:18.975456 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-10 01:03:18.975465 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:03:18.975475 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:03:18.975489 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-10 01:03:18.975615 | orchestrator | skipping: [testbed-node-3]
2025-09-10 01:03:18.975645 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-10 01:03:18.975655 | orchestrator | skipping: [testbed-node-4]
2025-09-10 01:03:18.975665 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-10 01:03:18.975674 | orchestrator | skipping: [testbed-node-5]
2025-09-10 01:03:18.975684 | orchestrator |
2025-09-10 01:03:18.975693 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2025-09-10 01:03:18.975703 | orchestrator | Wednesday 10 September 2025 01:01:23 +0000 (0:00:02.450) 0:01:29.740 ***
2025-09-10 01:03:18.975712 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-10 01:03:18.975722 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:03:18.975732 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-10 01:03:18.975741 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:03:18.975771 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-10 01:03:18.975781 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:03:18.975790 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-10 01:03:18.975800 | orchestrator | skipping: [testbed-node-3]
2025-09-10 01:03:18.975809 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-10 01:03:18.975818 | orchestrator | skipping: [testbed-node-4]
2025-09-10 01:03:18.975828 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-10 01:03:18.975837 | orchestrator | skipping: [testbed-node-5]
2025-09-10 01:03:18.975846 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-10 01:03:18.975856 | orchestrator |
2025-09-10 01:03:18.975865 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2025-09-10 01:03:18.975874 | orchestrator | Wednesday 10 September 2025 01:01:25 +0000 (0:00:01.890) 0:01:31.631 ***
2025-09-10 01:03:18.975884 | orchestrator | [WARNING]: Skipped
2025-09-10 01:03:18.975893 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path
2025-09-10 01:03:18.975902 | orchestrator | due to this access issue:
2025-09-10 01:03:18.975912 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is
2025-09-10 01:03:18.975921 | orchestrator | not a directory
2025-09-10 01:03:18.975930 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-10 01:03:18.975940 | orchestrator |
2025-09-10 01:03:18.975949 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2025-09-10 01:03:18.975958 | orchestrator | Wednesday 10 September 2025 01:01:26 +0000 (0:00:01.735) 0:01:33.367 ***
2025-09-10 01:03:18.975968 | orchestrator | skipping: [testbed-manager]
2025-09-10 01:03:18.975977 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:03:18.975987 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:03:18.975996 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:03:18.976005 | orchestrator | skipping: [testbed-node-3]
2025-09-10 01:03:18.976015 | orchestrator | skipping: [testbed-node-4]
2025-09-10 01:03:18.976023 | orchestrator | skipping: [testbed-node-5]
2025-09-10 01:03:18.976030 | orchestrator |
2025-09-10 01:03:18.976038 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2025-09-10 01:03:18.976046 | orchestrator | Wednesday 10 September 2025 01:01:27 +0000 (0:00:00.956) 0:01:34.324 ***
2025-09-10 01:03:18.976054 | orchestrator | skipping: [testbed-manager]
2025-09-10 01:03:18.976061 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:03:18.976069 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:03:18.976076 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:03:18.976084 | orchestrator | skipping: [testbed-node-3]
2025-09-10 01:03:18.976091 | orchestrator | skipping: [testbed-node-4]
2025-09-10 01:03:18.976099 | orchestrator | skipping: [testbed-node-5]
2025-09-10 01:03:18.976106 | orchestrator |
2025-09-10 01:03:18.976114 | orchestrator | TASK [prometheus : Check prometheus containers] ********************************
2025-09-10 01:03:18.976122 | orchestrator | Wednesday 10 September 2025 01:01:28 +0000 (0:00:00.706) 0:01:35.030 ***
2025-09-10 01:03:18.976131 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-10 01:03:18.976155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-10 01:03:18.976163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-10 01:03:18.976171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-10 01:03:18.976179 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-10 01:03:18.976187 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-10 01:03:18.976224 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-10 01:03:18.976233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 01:03:18.976248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 01:03:18.976267 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-10 01:03:18.976275 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-10 01:03:18.976284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 01:03:18.976292 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-10 01:03:18.976300 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-10 01:03:18.976309 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-10 01:03:18.976326 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-10 01:03:18.976339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 01:03:18.976348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-10 01:03:18.976356 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-10 01:03:18.976364 | orchestrator | changed:
[testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 01:03:18.976371 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-10 01:03:18.976379 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 01:03:18.976388 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-10 01:03:18.976403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-10 01:03:18.976416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-10 01:03:18.976425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-10 01:03:18.976433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 01:03:18.976441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 01:03:18.976449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-10 01:03:18.976457 | orchestrator | 2025-09-10 01:03:18.976465 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-09-10 01:03:18.976473 | orchestrator | Wednesday 10 September 2025 01:01:32 +0000 (0:00:04.301) 0:01:39.332 *** 2025-09-10 01:03:18.976480 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-10 01:03:18.976488 | orchestrator | skipping: [testbed-manager] 2025-09-10 01:03:18.976496 | orchestrator | 2025-09-10 
01:03:18.976521 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-10 01:03:18.976533 | orchestrator | Wednesday 10 September 2025 01:01:33 +0000 (0:00:01.121) 0:01:40.453 *** 2025-09-10 01:03:18.976541 | orchestrator | 2025-09-10 01:03:18.976549 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-10 01:03:18.976557 | orchestrator | Wednesday 10 September 2025 01:01:34 +0000 (0:00:00.137) 0:01:40.590 *** 2025-09-10 01:03:18.976564 | orchestrator | 2025-09-10 01:03:18.976572 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-10 01:03:18.976580 | orchestrator | Wednesday 10 September 2025 01:01:34 +0000 (0:00:00.126) 0:01:40.717 *** 2025-09-10 01:03:18.976587 | orchestrator | 2025-09-10 01:03:18.976595 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-10 01:03:18.976603 | orchestrator | Wednesday 10 September 2025 01:01:34 +0000 (0:00:00.129) 0:01:40.846 *** 2025-09-10 01:03:18.976610 | orchestrator | 2025-09-10 01:03:18.976618 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-10 01:03:18.976626 | orchestrator | Wednesday 10 September 2025 01:01:34 +0000 (0:00:00.290) 0:01:41.137 *** 2025-09-10 01:03:18.976633 | orchestrator | 2025-09-10 01:03:18.976641 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-10 01:03:18.976652 | orchestrator | Wednesday 10 September 2025 01:01:34 +0000 (0:00:00.063) 0:01:41.200 *** 2025-09-10 01:03:18.976660 | orchestrator | 2025-09-10 01:03:18.976668 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-10 01:03:18.976676 | orchestrator | Wednesday 10 September 2025 01:01:34 +0000 (0:00:00.064) 0:01:41.265 *** 2025-09-10 01:03:18.976683 | orchestrator | 
2025-09-10 01:03:18.976691 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-09-10 01:03:18.976699 | orchestrator | Wednesday 10 September 2025 01:01:34 +0000 (0:00:00.084) 0:01:41.349 *** 2025-09-10 01:03:18.976707 | orchestrator | changed: [testbed-manager] 2025-09-10 01:03:18.976714 | orchestrator | 2025-09-10 01:03:18.976722 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-09-10 01:03:18.976734 | orchestrator | Wednesday 10 September 2025 01:01:58 +0000 (0:00:23.928) 0:02:05.277 *** 2025-09-10 01:03:18.976742 | orchestrator | changed: [testbed-manager] 2025-09-10 01:03:18.976749 | orchestrator | changed: [testbed-node-4] 2025-09-10 01:03:18.976757 | orchestrator | changed: [testbed-node-5] 2025-09-10 01:03:18.976765 | orchestrator | changed: [testbed-node-3] 2025-09-10 01:03:18.976773 | orchestrator | changed: [testbed-node-0] 2025-09-10 01:03:18.976780 | orchestrator | changed: [testbed-node-1] 2025-09-10 01:03:18.976788 | orchestrator | changed: [testbed-node-2] 2025-09-10 01:03:18.976796 | orchestrator | 2025-09-10 01:03:18.976803 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-09-10 01:03:18.976811 | orchestrator | Wednesday 10 September 2025 01:02:11 +0000 (0:00:13.054) 0:02:18.332 *** 2025-09-10 01:03:18.976819 | orchestrator | changed: [testbed-node-1] 2025-09-10 01:03:18.976827 | orchestrator | changed: [testbed-node-2] 2025-09-10 01:03:18.976834 | orchestrator | changed: [testbed-node-0] 2025-09-10 01:03:18.976842 | orchestrator | 2025-09-10 01:03:18.976849 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-09-10 01:03:18.976857 | orchestrator | Wednesday 10 September 2025 01:02:22 +0000 (0:00:10.567) 0:02:28.899 *** 2025-09-10 01:03:18.976865 | orchestrator | changed: [testbed-node-1] 2025-09-10 01:03:18.976873 | orchestrator | changed: 
[testbed-node-0] 2025-09-10 01:03:18.976880 | orchestrator | changed: [testbed-node-2] 2025-09-10 01:03:18.976888 | orchestrator | 2025-09-10 01:03:18.976896 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-09-10 01:03:18.976903 | orchestrator | Wednesday 10 September 2025 01:02:28 +0000 (0:00:06.462) 0:02:35.362 *** 2025-09-10 01:03:18.976911 | orchestrator | changed: [testbed-node-1] 2025-09-10 01:03:18.976919 | orchestrator | changed: [testbed-manager] 2025-09-10 01:03:18.976926 | orchestrator | changed: [testbed-node-0] 2025-09-10 01:03:18.976934 | orchestrator | changed: [testbed-node-2] 2025-09-10 01:03:18.976946 | orchestrator | changed: [testbed-node-4] 2025-09-10 01:03:18.976954 | orchestrator | changed: [testbed-node-5] 2025-09-10 01:03:18.976962 | orchestrator | changed: [testbed-node-3] 2025-09-10 01:03:18.976969 | orchestrator | 2025-09-10 01:03:18.976977 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-09-10 01:03:18.976985 | orchestrator | Wednesday 10 September 2025 01:02:47 +0000 (0:00:18.225) 0:02:53.587 *** 2025-09-10 01:03:18.976993 | orchestrator | changed: [testbed-manager] 2025-09-10 01:03:18.977000 | orchestrator | 2025-09-10 01:03:18.977008 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-09-10 01:03:18.977016 | orchestrator | Wednesday 10 September 2025 01:02:54 +0000 (0:00:07.681) 0:03:01.269 *** 2025-09-10 01:03:18.977024 | orchestrator | changed: [testbed-node-2] 2025-09-10 01:03:18.977031 | orchestrator | changed: [testbed-node-1] 2025-09-10 01:03:18.977039 | orchestrator | changed: [testbed-node-0] 2025-09-10 01:03:18.977046 | orchestrator | 2025-09-10 01:03:18.977054 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-09-10 01:03:18.977062 | orchestrator | Wednesday 10 September 2025 01:03:00 +0000 (0:00:05.720) 
0:03:06.989 *** 2025-09-10 01:03:18.977070 | orchestrator | changed: [testbed-manager] 2025-09-10 01:03:18.977077 | orchestrator | 2025-09-10 01:03:18.977085 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-09-10 01:03:18.977093 | orchestrator | Wednesday 10 September 2025 01:03:05 +0000 (0:00:05.327) 0:03:12.317 *** 2025-09-10 01:03:18.977101 | orchestrator | changed: [testbed-node-3] 2025-09-10 01:03:18.977109 | orchestrator | changed: [testbed-node-5] 2025-09-10 01:03:18.977116 | orchestrator | changed: [testbed-node-4] 2025-09-10 01:03:18.977124 | orchestrator | 2025-09-10 01:03:18.977132 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-10 01:03:18.977140 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-10 01:03:18.977148 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-10 01:03:18.977156 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-10 01:03:18.977164 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-10 01:03:18.977172 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-10 01:03:18.977180 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-10 01:03:18.977187 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-10 01:03:18.977195 | orchestrator | 2025-09-10 01:03:18.977203 | orchestrator | 2025-09-10 01:03:18.977211 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-10 01:03:18.977221 | orchestrator | Wednesday 10 September 2025 01:03:17 +0000 
(0:00:11.707) 0:03:24.024 *** 2025-09-10 01:03:18.977229 | orchestrator | =============================================================================== 2025-09-10 01:03:18.977237 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 26.37s 2025-09-10 01:03:18.977245 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 23.93s 2025-09-10 01:03:18.977253 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 18.23s 2025-09-10 01:03:18.977260 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 17.97s 2025-09-10 01:03:18.977268 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 13.05s 2025-09-10 01:03:18.977284 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 11.71s 2025-09-10 01:03:18.977292 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.57s 2025-09-10 01:03:18.977299 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 7.68s 2025-09-10 01:03:18.977307 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 6.46s 2025-09-10 01:03:18.977315 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.21s 2025-09-10 01:03:18.977323 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.16s 2025-09-10 01:03:18.977330 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 6.00s 2025-09-10 01:03:18.977338 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 5.72s 2025-09-10 01:03:18.977345 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.33s 2025-09-10 01:03:18.977353 | orchestrator | prometheus : Ensuring config directories exist 
-------------------------- 4.34s 2025-09-10 01:03:18.977361 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.30s 2025-09-10 01:03:18.977368 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.48s 2025-09-10 01:03:18.977376 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 2.45s 2025-09-10 01:03:18.977384 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.17s 2025-09-10 01:03:18.977391 | orchestrator | prometheus : include_tasks ---------------------------------------------- 2.09s 2025-09-10 01:03:18.977399 | orchestrator | 2025-09-10 01:03:18 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:03:22.025668 | orchestrator | 2025-09-10 01:03:22 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:03:22.027835 | orchestrator | 2025-09-10 01:03:22 | INFO  | Task 7c694a03-2033-4e59-b064-6c92d4171981 is in state STARTED 2025-09-10 01:03:22.029164 | orchestrator | 2025-09-10 01:03:22 | INFO  | Task 730c46df-fde4-4c01-86d4-74fb9ba79436 is in state STARTED 2025-09-10 01:03:22.030750 | orchestrator | 2025-09-10 01:03:22 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED 2025-09-10 01:03:22.031065 | orchestrator | 2025-09-10 01:03:22 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:03:25.073962 | orchestrator | 2025-09-10 01:03:25 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:03:25.075928 | orchestrator | 2025-09-10 01:03:25 | INFO  | Task 7c694a03-2033-4e59-b064-6c92d4171981 is in state STARTED 2025-09-10 01:03:25.077988 | orchestrator | 2025-09-10 01:03:25 | INFO  | Task 730c46df-fde4-4c01-86d4-74fb9ba79436 is in state STARTED 2025-09-10 01:03:25.080774 | orchestrator | 2025-09-10 01:03:25 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED 2025-09-10 01:03:25.080797 | 
orchestrator | 2025-09-10 01:03:25 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:03:28.119566 | orchestrator | 2025-09-10 01:03:28 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:03:28.121790 | orchestrator | 2025-09-10 01:03:28 | INFO  | Task 7c694a03-2033-4e59-b064-6c92d4171981 is in state STARTED 2025-09-10 01:03:28.124020 | orchestrator | 2025-09-10 01:03:28 | INFO  | Task 730c46df-fde4-4c01-86d4-74fb9ba79436 is in state STARTED 2025-09-10 01:03:28.126721 | orchestrator | 2025-09-10 01:03:28 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED 2025-09-10 01:03:28.126754 | orchestrator | 2025-09-10 01:03:28 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:03:31.165645 | orchestrator | 2025-09-10 01:03:31 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:03:31.167594 | orchestrator | 2025-09-10 01:03:31 | INFO  | Task 7c694a03-2033-4e59-b064-6c92d4171981 is in state STARTED 2025-09-10 01:03:31.169802 | orchestrator | 2025-09-10 01:03:31 | INFO  | Task 730c46df-fde4-4c01-86d4-74fb9ba79436 is in state STARTED 2025-09-10 01:03:31.171835 | orchestrator | 2025-09-10 01:03:31 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED 2025-09-10 01:03:31.172160 | orchestrator | 2025-09-10 01:03:31 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:03:34.207482 | orchestrator | 2025-09-10 01:03:34 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:03:34.208438 | orchestrator | 2025-09-10 01:03:34 | INFO  | Task 7c694a03-2033-4e59-b064-6c92d4171981 is in state STARTED 2025-09-10 01:03:34.210612 | orchestrator | 2025-09-10 01:03:34 | INFO  | Task 730c46df-fde4-4c01-86d4-74fb9ba79436 is in state STARTED 2025-09-10 01:03:34.212381 | orchestrator | 2025-09-10 01:03:34 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED 2025-09-10 01:03:34.212951 | orchestrator | 2025-09-10 
01:03:34 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:03:37.252642 | orchestrator | 2025-09-10 01:03:37 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:03:37.252965 | orchestrator | 2025-09-10 01:03:37 | INFO  | Task 7c694a03-2033-4e59-b064-6c92d4171981 is in state STARTED 2025-09-10 01:03:37.256106 | orchestrator | 2025-09-10 01:03:37 | INFO  | Task 730c46df-fde4-4c01-86d4-74fb9ba79436 is in state STARTED 2025-09-10 01:03:37.257092 | orchestrator | 2025-09-10 01:03:37 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED 2025-09-10 01:03:37.257117 | orchestrator | 2025-09-10 01:03:37 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:03:40.501535 | orchestrator | 2025-09-10 01:03:40 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:03:40.502212 | orchestrator | 2025-09-10 01:03:40 | INFO  | Task 7c694a03-2033-4e59-b064-6c92d4171981 is in state STARTED 2025-09-10 01:03:40.503099 | orchestrator | 2025-09-10 01:03:40 | INFO  | Task 730c46df-fde4-4c01-86d4-74fb9ba79436 is in state STARTED 2025-09-10 01:03:40.503993 | orchestrator | 2025-09-10 01:03:40 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED 2025-09-10 01:03:40.504010 | orchestrator | 2025-09-10 01:03:40 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:03:43.553598 | orchestrator | 2025-09-10 01:03:43 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:03:43.555315 | orchestrator | 2025-09-10 01:03:43 | INFO  | Task 7c694a03-2033-4e59-b064-6c92d4171981 is in state STARTED 2025-09-10 01:03:43.556260 | orchestrator | 2025-09-10 01:03:43 | INFO  | Task 730c46df-fde4-4c01-86d4-74fb9ba79436 is in state STARTED 2025-09-10 01:03:43.557421 | orchestrator | 2025-09-10 01:03:43 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED 2025-09-10 01:03:43.557440 | orchestrator | 2025-09-10 01:03:43 | INFO  | Wait 1 
second(s) until the next check 2025-09-10 01:03:46.595806 | orchestrator | 2025-09-10 01:03:46 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:03:46.597181 | orchestrator | 2025-09-10 01:03:46 | INFO  | Task 7c694a03-2033-4e59-b064-6c92d4171981 is in state STARTED 2025-09-10 01:03:46.601010 | orchestrator | 2025-09-10 01:03:46 | INFO  | Task 730c46df-fde4-4c01-86d4-74fb9ba79436 is in state STARTED 2025-09-10 01:03:46.602096 | orchestrator | 2025-09-10 01:03:46 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED 2025-09-10 01:03:46.602149 | orchestrator | 2025-09-10 01:03:46 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:03:49.667819 | orchestrator | 2025-09-10 01:03:49 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:03:49.668951 | orchestrator | 2025-09-10 01:03:49 | INFO  | Task 7c694a03-2033-4e59-b064-6c92d4171981 is in state STARTED 2025-09-10 01:03:49.671415 | orchestrator | 2025-09-10 01:03:49 | INFO  | Task 730c46df-fde4-4c01-86d4-74fb9ba79436 is in state STARTED 2025-09-10 01:03:49.676522 | orchestrator | 2025-09-10 01:03:49 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED 2025-09-10 01:03:49.676557 | orchestrator | 2025-09-10 01:03:49 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:03:52.710649 | orchestrator | 2025-09-10 01:03:52 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:03:52.711240 | orchestrator | 2025-09-10 01:03:52 | INFO  | Task 7c694a03-2033-4e59-b064-6c92d4171981 is in state SUCCESS 2025-09-10 01:03:52.713242 | orchestrator | 2025-09-10 01:03:52.713316 | orchestrator | 2025-09-10 01:03:52.713331 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-10 01:03:52.713343 | orchestrator | 2025-09-10 01:03:52.713355 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 
2025-09-10 01:03:52.713366 | orchestrator | Wednesday 10 September 2025 01:00:04 +0000 (0:00:00.357) 0:00:00.357 ***
2025-09-10 01:03:52.713377 | orchestrator | ok: [testbed-node-0]
2025-09-10 01:03:52.713389 | orchestrator | ok: [testbed-node-1]
2025-09-10 01:03:52.713400 | orchestrator | ok: [testbed-node-2]
2025-09-10 01:03:52.713410 | orchestrator | ok: [testbed-node-3]
2025-09-10 01:03:52.713435 | orchestrator | ok: [testbed-node-4]
2025-09-10 01:03:52.713446 | orchestrator | ok: [testbed-node-5]
2025-09-10 01:03:52.713457 | orchestrator |
2025-09-10 01:03:52.713467 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-10 01:03:52.713478 | orchestrator | Wednesday 10 September 2025 01:00:05 +0000 (0:00:01.449) 0:00:01.806 ***
2025-09-10 01:03:52.713489 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2025-09-10 01:03:52.713533 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2025-09-10 01:03:52.713546 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2025-09-10 01:03:52.713556 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True)
2025-09-10 01:03:52.713567 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True)
2025-09-10 01:03:52.713577 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True)
2025-09-10 01:03:52.713588 | orchestrator |
2025-09-10 01:03:52.713598 | orchestrator | PLAY [Apply role cinder] *******************************************************
2025-09-10 01:03:52.713609 | orchestrator |
2025-09-10 01:03:52.713620 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-09-10 01:03:52.713630 | orchestrator | Wednesday 10 September 2025 01:00:06 +0000 (0:00:00.658) 0:00:02.464 ***
2025-09-10 01:03:52.713641 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-10 01:03:52.713653 | orchestrator |
2025-09-10 01:03:52.713663 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2025-09-10 01:03:52.713674 | orchestrator | Wednesday 10 September 2025 01:00:07 +0000 (0:00:01.078) 0:00:03.542 ***
2025-09-10 01:03:52.713685 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2025-09-10 01:03:52.713695 | orchestrator |
2025-09-10 01:03:52.713706 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2025-09-10 01:03:52.713716 | orchestrator | Wednesday 10 September 2025 01:00:11 +0000 (0:00:03.683) 0:00:07.226 ***
2025-09-10 01:03:52.713727 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2025-09-10 01:03:52.713739 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2025-09-10 01:03:52.713770 | orchestrator |
2025-09-10 01:03:52.713781 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2025-09-10 01:03:52.713794 | orchestrator | Wednesday 10 September 2025 01:00:18 +0000 (0:00:07.799) 0:00:15.025 ***
2025-09-10 01:03:52.713807 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-10 01:03:52.713819 | orchestrator |
2025-09-10 01:03:52.713832 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2025-09-10 01:03:52.713844 | orchestrator | Wednesday 10 September 2025 01:00:22 +0000 (0:00:03.142) 0:00:18.167 ***
2025-09-10 01:03:52.713857 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-10 01:03:52.713869 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service)
2025-09-10 01:03:52.713881 | orchestrator |
2025-09-10 01:03:52.713893 | orchestrator | TASK [service-ks-register : cinder | Creating roles] ***************************
2025-09-10 01:03:52.713906 | orchestrator | Wednesday 10 September 2025 01:00:26 +0000 (0:00:04.393) 0:00:22.561 ***
2025-09-10 01:03:52.713918 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-10 01:03:52.713930 | orchestrator |
2025-09-10 01:03:52.713942 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] **********************
2025-09-10 01:03:52.713954 | orchestrator | Wednesday 10 September 2025 01:00:30 +0000 (0:00:04.216) 0:00:26.777 ***
2025-09-10 01:03:52.713967 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin)
2025-09-10 01:03:52.713979 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service)
2025-09-10 01:03:52.713992 | orchestrator |
2025-09-10 01:03:52.714004 | orchestrator | TASK [cinder : Ensuring config directories exist] ******************************
2025-09-10 01:03:52.714066 | orchestrator | Wednesday 10 September 2025 01:00:38 +0000 (0:00:08.009) 0:00:34.787 ***
2025-09-10 01:03:52.714083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-10 01:03:52.714124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-10 01:03:52.714139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-10 01:03:52.714161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-10 01:03:52.714173 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-10 01:03:52.714184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-10 01:03:52.714206 | orchestrator | changed: [testbed-node-2] => (item={'key':
'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-10 01:03:52.714222 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-10 01:03:52.714240 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-10 01:03:52.714252 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-10 01:03:52.714264 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-10 01:03:52.714275 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-10 01:03:52.714286 | orchestrator | 
2025-09-10 01:03:52.714303 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-09-10 01:03:52.714314 | orchestrator | Wednesday 10 September 2025 01:00:40 +0000 (0:00:02.057) 0:00:36.844 ***
2025-09-10 01:03:52.714325 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:03:52.714336 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:03:52.714346 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:03:52.714357 | orchestrator | skipping: [testbed-node-3]
2025-09-10 01:03:52.714368 | orchestrator | skipping: [testbed-node-4]
2025-09-10 01:03:52.714378 | orchestrator | skipping: [testbed-node-5]
2025-09-10 01:03:52.714389 | orchestrator | 
2025-09-10 01:03:52.714405 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-09-10 01:03:52.714416 | orchestrator | Wednesday 10 September 2025 01:00:41 +0000 (0:00:00.519) 0:00:37.364 ***
2025-09-10 01:03:52.714426 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:03:52.714445 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:03:52.714456 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:03:52.714467 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-10 01:03:52.714478 | orchestrator | 
2025-09-10 01:03:52.714488 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2025-09-10 01:03:52.714515 | orchestrator | Wednesday 10 September 2025 01:00:42 +0000 (0:00:00.818) 0:00:38.182 ***
2025-09-10 01:03:52.714526 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume)
2025-09-10 01:03:52.714537 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume)
2025-09-10 01:03:52.714548 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume)
2025-09-10 01:03:52.714559 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup)
2025-09-10 01:03:52.714569 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup)
2025-09-10 01:03:52.714580 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup)
2025-09-10 01:03:52.714591 | orchestrator | 
2025-09-10 01:03:52.714601 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************
2025-09-10 01:03:52.714612 | orchestrator | Wednesday 10 September 2025 01:00:43 +0000 (0:00:01.795) 0:00:39.978 ***
2025-09-10 01:03:52.714624 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 
'enabled': True}])  2025-09-10 01:03:52.714636 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-10 01:03:52.714648 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-10 01:03:52.714681 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-10 01:03:52.714700 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-10 01:03:52.714712 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': 
'30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-10 01:03:52.714724 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-10 01:03:52.714735 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-10 01:03:52.714757 | orchestrator | changed: 
[testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-10 01:03:52.714777 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-10 01:03:52.714789 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-10 01:03:52.714800 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-10 01:03:52.714811 | orchestrator | 2025-09-10 01:03:52.714822 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-09-10 01:03:52.714833 | orchestrator | Wednesday 10 September 2025 01:00:47 +0000 (0:00:03.903) 0:00:43.881 *** 2025-09-10 01:03:52.714844 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-10 01:03:52.714855 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-10 01:03:52.714866 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 
2025-09-10 01:03:52.714877 | orchestrator | 
2025-09-10 01:03:52.714888 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2025-09-10 01:03:52.714898 | orchestrator | Wednesday 10 September 2025 01:00:49 +0000 (0:00:02.169) 0:00:46.050 ***
2025-09-10 01:03:52.714916 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring)
2025-09-10 01:03:52.714926 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring)
2025-09-10 01:03:52.714937 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring)
2025-09-10 01:03:52.714948 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring)
2025-09-10 01:03:52.714959 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring)
2025-09-10 01:03:52.714975 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring)
2025-09-10 01:03:52.714986 | orchestrator | 
2025-09-10 01:03:52.714997 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2025-09-10 01:03:52.715007 | orchestrator | Wednesday 10 September 2025 01:00:53 +0000 (0:00:03.220) 0:00:49.271 ***
2025-09-10 01:03:52.715018 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume)
2025-09-10 01:03:52.715029 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume)
2025-09-10 01:03:52.715039 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume)
2025-09-10 01:03:52.715054 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup)
2025-09-10 01:03:52.715065 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup)
2025-09-10 01:03:52.715076 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup)
2025-09-10 01:03:52.715086 | orchestrator | 
2025-09-10 01:03:52.715097 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2025-09-10 01:03:52.715108 | orchestrator | Wednesday 10 September 2025 01:00:54 +0000 (0:00:01.110) 0:00:50.381 ***
2025-09-10 01:03:52.715118 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:03:52.715129 | orchestrator | 
2025-09-10 01:03:52.715139 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2025-09-10 01:03:52.715150 | orchestrator | Wednesday 10 September 2025 01:00:54 +0000 (0:00:00.103) 0:00:50.485 ***
2025-09-10 01:03:52.715161 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:03:52.715171 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:03:52.715182 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:03:52.715192 | orchestrator | skipping: [testbed-node-3]
2025-09-10 01:03:52.715203 | orchestrator | skipping: [testbed-node-4]
2025-09-10 01:03:52.715213 | orchestrator | skipping: [testbed-node-5]
2025-09-10 01:03:52.715224 | orchestrator | 
2025-09-10 01:03:52.715235 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-09-10 01:03:52.715245 | orchestrator | Wednesday 10 September 2025 01:00:55 +0000 (0:00:00.830) 0:00:51.316 ***
2025-09-10 01:03:52.715257 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-10 01:03:52.715268 | orchestrator | 
2025-09-10 01:03:52.715279 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] *********
2025-09-10 01:03:52.715290 | orchestrator | Wednesday 10 September 2025 01:00:56 +0000 (0:00:01.090) 0:00:52.407 ***
2025-09-10 01:03:52.715301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-10 01:03:52.715313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-10 01:03:52.715344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-10 01:03:52.715360 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-10 01:03:52.715372 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-10 01:03:52.715384 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-10 01:03:52.715401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-10 01:03:52.715413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-10 01:03:52.715431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-10 01:03:52.715447 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-10 01:03:52.715458 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-10 01:03:52.715469 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-10 01:03:52.715487 | orchestrator | 2025-09-10 01:03:52.715515 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-09-10 01:03:52.715527 | orchestrator | Wednesday 10 September 2025 01:00:59 +0000 (0:00:03.015) 0:00:55.422 *** 2025-09-10 01:03:52.715539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 
'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-10 01:03:52.715556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-10 01:03:52.715572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-10 01:03:52.715583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 
'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-10 01:03:52.715595 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:03:52.715606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-10 01:03:52.715624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-10 01:03:52.715635 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:03:52.715646 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:03:52.715657 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-10 01:03:52.715679 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-10 01:03:52.715691 | orchestrator | skipping: [testbed-node-3] 2025-09-10 
01:03:52.715702 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-10 01:03:52.715713 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-10 01:03:52.715731 | orchestrator | skipping: [testbed-node-4] 2025-09-10 01:03:52.715743 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-10 01:03:52.715754 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-10 01:03:52.715765 | orchestrator | skipping: [testbed-node-5] 2025-09-10 01:03:52.715776 | orchestrator | 2025-09-10 01:03:52.715787 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-09-10 01:03:52.715798 | orchestrator | Wednesday 10 September 2025 01:01:01 +0000 (0:00:01.948) 0:00:57.371 *** 2025-09-10 01:03:52.715819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-10 01:03:52.715831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-10 01:03:52.715842 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:03:52.715853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-10 01:03:52.715871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-10 01:03:52.715882 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:03:52.715892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-10 01:03:52.715910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 
'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-10 01:03:52.715921 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:03:52.715936 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-10 01:03:52.715948 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-10 01:03:52.715965 | orchestrator | skipping: [testbed-node-3] 2025-09-10 01:03:52.715976 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-10 01:03:52.715987 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-10 01:03:52.715998 | orchestrator | skipping: [testbed-node-5] 2025-09-10 01:03:52.716014 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-10 01:03:52.716030 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-10 01:03:52.716042 | orchestrator | skipping: [testbed-node-4] 2025-09-10 01:03:52.716053 | orchestrator | 2025-09-10 01:03:52.716064 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-09-10 01:03:52.716082 | orchestrator | Wednesday 10 September 2025 01:01:03 +0000 (0:00:01.964) 0:00:59.336 *** 2025-09-10 01:03:52.716093 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-10 01:03:52.716105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-10 01:03:52.716116 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 
'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-10 01:03:52.716141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-10 01:03:52.716152 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-10 01:03:52.716171 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-10 01:03:52.716182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-10 01:03:52.716193 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-10 01:03:52.716209 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-10 01:03:52.716225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-10 01:03:52.716237 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-10 01:03:52.716254 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-10 01:03:52.716265 | orchestrator | 2025-09-10 01:03:52.716276 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-09-10 01:03:52.716287 | orchestrator | Wednesday 10 September 2025 01:01:06 +0000 (0:00:03.213) 0:01:02.549 *** 2025-09-10 01:03:52.716297 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-10 01:03:52.716309 | orchestrator | skipping: [testbed-node-3] 2025-09-10 01:03:52.716319 | orchestrator | skipping: [testbed-node-4] => 
(item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-10 01:03:52.716330 | orchestrator | skipping: [testbed-node-4] 2025-09-10 01:03:52.716341 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-10 01:03:52.716352 | orchestrator | skipping: [testbed-node-5] 2025-09-10 01:03:52.716362 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-10 01:03:52.716373 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-10 01:03:52.716384 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-10 01:03:52.716395 | orchestrator | 2025-09-10 01:03:52.716405 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-09-10 01:03:52.716416 | orchestrator | Wednesday 10 September 2025 01:01:08 +0000 (0:00:02.104) 0:01:04.654 *** 2025-09-10 01:03:52.716427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-10 01:03:52.716448 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-10 01:03:52.716466 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-10 01:03:52.716478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-10 01:03:52.716489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-10 01:03:52.716549 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-10 01:03:52.716576 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-10 01:03:52.716589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-10 01:03:52.716600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-10 01:03:52.716611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-10 01:03:52.716622 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-10 01:03:52.716634 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-10 01:03:52.716651 | orchestrator | 
2025-09-10 01:03:52.716662 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ********************
2025-09-10 01:03:52.716673 | orchestrator | Wednesday 10 September 2025 01:01:19 +0000 (0:00:10.939) 0:01:15.593 ***
2025-09-10 01:03:52.716689 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:03:52.716701 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:03:52.716712 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:03:52.716723 | orchestrator | changed: [testbed-node-3]
2025-09-10 01:03:52.716734 | orchestrator | changed: [testbed-node-4]
2025-09-10 01:03:52.716744 | orchestrator | changed: [testbed-node-5]
2025-09-10 01:03:52.716755 | orchestrator | 
2025-09-10 01:03:52.716765 | orchestrator | TASK [cinder : Copying over existing policy file] ******************************
2025-09-10 01:03:52.716776 | orchestrator | Wednesday 10 September 2025 01:01:21 +0000 (0:00:02.344) 0:01:17.937 ***
2025-09-10 01:03:52.716792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-10 01:03:52.716804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-10 01:03:52.716815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 
5672'], 'timeout': '30'}}})  2025-09-10 01:03:52.716827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-10 01:03:52.716838 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:03:52.716849 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:03:52.716874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-10 01:03:52.716890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-10 01:03:52.716902 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:03:52.716913 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-10 01:03:52.716924 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}})  2025-09-10 01:03:52.716935 | orchestrator | skipping: [testbed-node-3] 2025-09-10 01:03:52.716947 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-10 01:03:52.716968 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-10 01:03:52.716980 | orchestrator | skipping: [testbed-node-4] 2025-09-10 01:03:52.717055 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 
'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-10 01:03:52.717070 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-10 01:03:52.717081 | orchestrator | skipping: [testbed-node-5] 2025-09-10 01:03:52.717092 | orchestrator | 2025-09-10 01:03:52.717103 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-09-10 01:03:52.717114 | orchestrator | Wednesday 10 September 2025 01:01:23 +0000 (0:00:01.847) 0:01:19.785 *** 2025-09-10 01:03:52.717125 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:03:52.717135 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:03:52.717146 | orchestrator | skipping: [testbed-node-2] 
2025-09-10 01:03:52.717157 | orchestrator | skipping: [testbed-node-3]
2025-09-10 01:03:52.717168 | orchestrator | skipping: [testbed-node-4]
2025-09-10 01:03:52.717179 | orchestrator | skipping: [testbed-node-5]
2025-09-10 01:03:52.717190 | orchestrator | 
2025-09-10 01:03:52.717201 | orchestrator | TASK [cinder : Check cinder containers] ****************************************
2025-09-10 01:03:52.717211 | orchestrator | Wednesday 10 September 2025 01:01:24 +0000 (0:00:01.037) 0:01:20.822 ***
2025-09-10 01:03:52.717223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-10 01:03:52.717242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 
'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-10 01:03:52.717268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-10 01:03:52.717280 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-10 01:03:52.717291 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-10 01:03:52.717302 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-10 01:03:52.717320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 
'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-10 01:03:52.717337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-10 01:03:52.717353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-10 01:03:52.717364 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-10 01:03:52.717376 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-10 01:03:52.717387 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 
'timeout': '30'}}})
2025-09-10 01:03:52.717404 | orchestrator | 
2025-09-10 01:03:52.717415 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-09-10 01:03:52.717426 | orchestrator | Wednesday 10 September 2025 01:01:27 +0000 (0:00:02.812) 0:01:23.634 ***
2025-09-10 01:03:52.717437 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:03:52.717448 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:03:52.717459 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:03:52.717469 | orchestrator | skipping: [testbed-node-3]
2025-09-10 01:03:52.717480 | orchestrator | skipping: [testbed-node-4]
2025-09-10 01:03:52.717491 | orchestrator | skipping: [testbed-node-5]
2025-09-10 01:03:52.717520 | orchestrator | 
2025-09-10 01:03:52.717532 | orchestrator | TASK [cinder : Creating Cinder database] ***************************************
2025-09-10 01:03:52.717543 | orchestrator | Wednesday 10 September 2025 01:01:27 +0000 (0:00:00.471) 0:01:24.105 ***
2025-09-10 01:03:52.717554 | orchestrator | changed: [testbed-node-0]
2025-09-10 01:03:52.717564 | orchestrator | 
2025-09-10 01:03:52.717575 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] **********
2025-09-10 01:03:52.717586 | orchestrator | Wednesday 10 September 2025 01:01:30 +0000 (0:00:02.772) 0:01:26.878 ***
2025-09-10 01:03:52.717596 | orchestrator | changed: [testbed-node-0]
2025-09-10 01:03:52.717607 | orchestrator | 
2025-09-10 01:03:52.717618 | orchestrator | TASK [cinder : Running Cinder bootstrap container] *****************************
2025-09-10 01:03:52.717628 | orchestrator | Wednesday 10 September 2025 01:01:33 +0000 (0:00:02.346) 0:01:29.224 ***
2025-09-10 01:03:52.717639 | orchestrator | changed: [testbed-node-0]
2025-09-10 01:03:52.717650 | orchestrator | 
2025-09-10 01:03:52.717660 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-09-10 01:03:52.717671 | orchestrator | Wednesday 10 September 2025 01:01:50 +0000 (0:00:17.633) 0:01:46.858 ***
2025-09-10 01:03:52.717682 | orchestrator | 
2025-09-10 01:03:52.717698 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-09-10 01:03:52.717709 | orchestrator | Wednesday 10 September 2025 01:01:50 +0000 (0:00:00.065) 0:01:46.923 ***
2025-09-10 01:03:52.717720 | orchestrator | 
2025-09-10 01:03:52.717730 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-09-10 01:03:52.717741 | orchestrator | Wednesday 10 September 2025 01:01:50 +0000 (0:00:00.061) 0:01:46.984 ***
2025-09-10 01:03:52.717751 | orchestrator | 
2025-09-10 01:03:52.717767 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-09-10 01:03:52.717778 | orchestrator | Wednesday 10 September 2025 01:01:50 +0000 (0:00:00.071) 0:01:47.056 ***
2025-09-10 01:03:52.717789 | orchestrator | 
2025-09-10 01:03:52.717800 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-09-10 01:03:52.717810 | orchestrator | Wednesday 10 September 2025 01:01:51 +0000 (0:00:00.067) 0:01:47.124 ***
2025-09-10 01:03:52.717821 | orchestrator | 
2025-09-10 01:03:52.717832 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-09-10 01:03:52.717842 | orchestrator | Wednesday 10 September 2025 01:01:51 +0000 (0:00:00.065) 0:01:47.189 ***
2025-09-10 01:03:52.717853 | orchestrator | 
2025-09-10 01:03:52.717864 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************
2025-09-10 01:03:52.717875 | orchestrator | Wednesday 10 September 2025 01:01:51 +0000 (0:00:00.065) 0:01:47.255 ***
2025-09-10 01:03:52.717885 | orchestrator | changed: [testbed-node-0]
2025-09-10 01:03:52.717896 | orchestrator | changed: [testbed-node-1]
2025-09-10 01:03:52.717907 | orchestrator | changed: [testbed-node-2]
2025-09-10 01:03:52.717924 | orchestrator | 
2025-09-10 01:03:52.717935 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ******************
2025-09-10 01:03:52.717946 | orchestrator | Wednesday 10 September 2025 01:02:13 +0000 (0:00:22.667) 0:02:09.922 ***
2025-09-10 01:03:52.717957 | orchestrator | changed: [testbed-node-0]
2025-09-10 01:03:52.717967 | orchestrator | changed: [testbed-node-2]
2025-09-10 01:03:52.717978 | orchestrator | changed: [testbed-node-1]
2025-09-10 01:03:52.717989 | orchestrator | 
2025-09-10 01:03:52.717999 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] *********************
2025-09-10 01:03:52.718010 | orchestrator | Wednesday 10 September 2025 01:02:24 +0000 (0:00:10.395) 0:02:20.318 ***
2025-09-10 01:03:52.718061 | orchestrator | changed: [testbed-node-3]
2025-09-10 01:03:52.718072 | orchestrator | changed: [testbed-node-5]
2025-09-10 01:03:52.718083 | orchestrator | changed: [testbed-node-4]
2025-09-10 01:03:52.718097 | orchestrator | 
2025-09-10 01:03:52.718108 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] *********************
2025-09-10 01:03:52.718119 | orchestrator | Wednesday 10 September 2025 01:03:36 +0000 (0:01:12.652) 0:03:32.970 ***
2025-09-10 01:03:52.718130 | orchestrator | changed: [testbed-node-4]
2025-09-10 01:03:52.718140 | orchestrator | changed: [testbed-node-3]
2025-09-10 01:03:52.718151 | orchestrator | changed: [testbed-node-5]
2025-09-10 01:03:52.718161 | orchestrator | 
2025-09-10 01:03:52.718172 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] ***
2025-09-10 01:03:52.718183 | orchestrator | Wednesday 10 September 2025 01:03:49 +0000 (0:00:12.789) 0:03:45.760 ***
2025-09-10 01:03:52.718193 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:03:52.718204 | orchestrator | 
2025-09-10 01:03:52.718214 | orchestrator | PLAY RECAP *********************************************************************
2025-09-10 01:03:52.718225 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-10 01:03:52.718237 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-09-10 01:03:52.718247 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-09-10 01:03:52.718258 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-09-10 01:03:52.718269 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-09-10 01:03:52.718279 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-09-10 01:03:52.718290 | orchestrator | 
2025-09-10 01:03:52.718301 | orchestrator | 
2025-09-10 01:03:52.718311 | orchestrator | TASKS RECAP ********************************************************************
2025-09-10 01:03:52.718322 | orchestrator | Wednesday 10 September 2025 01:03:50 +0000 (0:00:01.110) 0:03:46.870 ***
2025-09-10 01:03:52.718333 | orchestrator | ===============================================================================
2025-09-10 01:03:52.718343 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 72.65s
2025-09-10 01:03:52.718354 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 22.67s
2025-09-10 01:03:52.718364 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 17.63s
2025-09-10 01:03:52.718375 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 12.79s
2025-09-10 01:03:52.718385 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 10.94s
2025-09-10 01:03:52.718396 | orchestrator | cinder : 
Restart cinder-scheduler container ---------------------------- 10.40s 2025-09-10 01:03:52.718407 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.01s 2025-09-10 01:03:52.718424 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 7.80s 2025-09-10 01:03:52.718441 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.39s 2025-09-10 01:03:52.718452 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 4.22s 2025-09-10 01:03:52.718463 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.90s 2025-09-10 01:03:52.718473 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.68s 2025-09-10 01:03:52.718484 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.22s 2025-09-10 01:03:52.718516 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.21s 2025-09-10 01:03:52.718528 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.14s 2025-09-10 01:03:52.718538 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.02s 2025-09-10 01:03:52.718549 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.81s 2025-09-10 01:03:52.718559 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.77s 2025-09-10 01:03:52.718570 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.35s 2025-09-10 01:03:52.718581 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 2.34s 2025-09-10 01:03:52.718591 | orchestrator | 2025-09-10 01:03:52 | INFO  | Task 730c46df-fde4-4c01-86d4-74fb9ba79436 is in state STARTED 2025-09-10 01:03:52.718602 | orchestrator | 2025-09-10 
01:03:52 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED 2025-09-10 01:03:52.718613 | orchestrator | 2025-09-10 01:03:52 | INFO  | Task 0964edfb-d947-491e-8013-959a26b30137 is in state STARTED 2025-09-10 01:03:52.718624 | orchestrator | 2025-09-10 01:03:52 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:05:23.720795 | orchestrator | 2025-09-10 01:05:23 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:05:23.721014 | orchestrator | 2025-09-10 01:05:23 | INFO  | Task 730c46df-fde4-4c01-86d4-74fb9ba79436 is in state SUCCESS 2025-09-10 01:05:23.722848 | orchestrator | 2025-09-10 01:05:23.722956 | orchestrator | 2025-09-10
01:05:23.722973 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-10 01:05:23.722986 | orchestrator | 2025-09-10 01:05:23.722997 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-10 01:05:23.723009 | orchestrator | Wednesday 10 September 2025 01:03:22 +0000 (0:00:00.287) 0:00:00.287 *** 2025-09-10 01:05:23.723020 | orchestrator | ok: [testbed-node-0] 2025-09-10 01:05:23.723032 | orchestrator | ok: [testbed-node-1] 2025-09-10 01:05:23.723044 | orchestrator | ok: [testbed-node-2] 2025-09-10 01:05:23.723055 | orchestrator | 2025-09-10 01:05:23.723066 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-10 01:05:23.723077 | orchestrator | Wednesday 10 September 2025 01:03:22 +0000 (0:00:00.287) 0:00:00.575 *** 2025-09-10 01:05:23.723102 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-09-10 01:05:23.723115 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-09-10 01:05:23.723126 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-09-10 01:05:23.723137 | orchestrator | 2025-09-10 01:05:23.723149 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-09-10 01:05:23.723160 | orchestrator | 2025-09-10 01:05:23.723172 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-10 01:05:23.723209 | orchestrator | Wednesday 10 September 2025 01:03:22 +0000 (0:00:00.399) 0:00:00.974 *** 2025-09-10 01:05:23.723222 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 01:05:23.723234 | orchestrator | 2025-09-10 01:05:23.723245 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-09-10 01:05:23.723257 | orchestrator | Wednesday 10 
September 2025 01:03:23 +0000 (0:00:00.526) 0:00:01.500 *** 2025-09-10 01:05:23.723269 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-09-10 01:05:23.723280 | orchestrator | 2025-09-10 01:05:23.723291 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-09-10 01:05:23.723302 | orchestrator | Wednesday 10 September 2025 01:03:26 +0000 (0:00:03.323) 0:00:04.824 *** 2025-09-10 01:05:23.723313 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-09-10 01:05:23.723325 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-09-10 01:05:23.723336 | orchestrator | 2025-09-10 01:05:23.723348 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-09-10 01:05:23.723359 | orchestrator | Wednesday 10 September 2025 01:03:33 +0000 (0:00:06.700) 0:00:11.524 *** 2025-09-10 01:05:23.723370 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-10 01:05:23.723382 | orchestrator | 2025-09-10 01:05:23.723393 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-09-10 01:05:23.723404 | orchestrator | Wednesday 10 September 2025 01:03:36 +0000 (0:00:03.537) 0:00:15.062 *** 2025-09-10 01:05:23.723416 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-10 01:05:23.723427 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-09-10 01:05:23.723438 | orchestrator | 2025-09-10 01:05:23.723450 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-09-10 01:05:23.723461 | orchestrator | Wednesday 10 September 2025 01:03:40 +0000 (0:00:04.064) 0:00:19.126 *** 2025-09-10 01:05:23.723472 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-10 01:05:23.723484 | orchestrator | 
changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-09-10 01:05:23.723519 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-09-10 01:05:23.723544 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-09-10 01:05:23.723556 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-09-10 01:05:23.723567 | orchestrator | 2025-09-10 01:05:23.723578 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-09-10 01:05:23.723588 | orchestrator | Wednesday 10 September 2025 01:03:56 +0000 (0:00:16.020) 0:00:35.146 *** 2025-09-10 01:05:23.723599 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-09-10 01:05:23.723610 | orchestrator | 2025-09-10 01:05:23.723620 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-09-10 01:05:23.723631 | orchestrator | Wednesday 10 September 2025 01:04:01 +0000 (0:00:04.186) 0:00:39.332 *** 2025-09-10 01:05:23.723646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-10 01:05:23.723687 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-10 01:05:23.723701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-10 01:05:23.723713 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-10 01:05:23.723732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-10 01:05:23.723744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-10 01:05:23.723762 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-10 01:05:23.723782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-10 01:05:23.723793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-10 01:05:23.723804 | orchestrator | 2025-09-10 01:05:23.723816 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-09-10 
01:05:23.723827 | orchestrator | Wednesday 10 September 2025 01:04:02 +0000 (0:00:01.834) 0:00:41.166 *** 2025-09-10 01:05:23.723837 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-09-10 01:05:23.723848 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-09-10 01:05:23.723859 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-09-10 01:05:23.723870 | orchestrator | 2025-09-10 01:05:23.723881 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-09-10 01:05:23.723892 | orchestrator | Wednesday 10 September 2025 01:04:04 +0000 (0:00:01.500) 0:00:42.667 *** 2025-09-10 01:05:23.723902 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:05:23.723913 | orchestrator | 2025-09-10 01:05:23.723924 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-09-10 01:05:23.723935 | orchestrator | Wednesday 10 September 2025 01:04:04 +0000 (0:00:00.231) 0:00:42.898 *** 2025-09-10 01:05:23.723946 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:05:23.723957 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:05:23.723967 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:05:23.723978 | orchestrator | 2025-09-10 01:05:23.723989 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-10 01:05:23.724000 | orchestrator | Wednesday 10 September 2025 01:04:06 +0000 (0:00:01.415) 0:00:44.314 *** 2025-09-10 01:05:23.724010 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 01:05:23.724021 | orchestrator | 2025-09-10 01:05:23.724032 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-09-10 01:05:23.724048 | orchestrator | Wednesday 10 September 2025 01:04:07 +0000 (0:00:01.354) 0:00:45.668 *** 2025-09-10 
01:05:23.724059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-10 01:05:23.724087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-10 01:05:23.724099 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-10 01:05:23.724111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-10 01:05:23.724127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-10 01:05:23.724139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-10 01:05:23.724157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-10 01:05:23.724176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-10 01:05:23.724188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-10 01:05:23.724199 | orchestrator | 2025-09-10 01:05:23.724209 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-09-10 01:05:23.724220 | orchestrator | Wednesday 10 September 2025 01:04:11 +0000 (0:00:03.708) 0:00:49.377 *** 2025-09-10 01:05:23.724232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-10 01:05:23.724248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-10 01:05:23.724267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-10 01:05:23.724278 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:05:23.724296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-10 01:05:23.724308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-10 01:05:23.724319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-10 01:05:23.724330 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:05:23.724341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-10 01:05:23.724358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-10 01:05:23.724376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-10 01:05:23.724387 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:05:23.724398 | orchestrator | 2025-09-10 01:05:23.724409 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-09-10 01:05:23.724419 | orchestrator | Wednesday 10 September 2025 01:04:13 +0000 (0:00:02.576) 0:00:51.953 *** 2025-09-10 01:05:23.724438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-10 01:05:23.724450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-10 01:05:23.724461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-10 01:05:23.724489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-10 01:05:23.724517 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:05:23.724529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-10 01:05:23.724541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-10 01:05:23.724552 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:05:23.724571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-10 01:05:23.724583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-10 01:05:23.724594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-10 01:05:23.724612 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:05:23.724623 | orchestrator | 2025-09-10 01:05:23.724634 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-09-10 01:05:23.724645 | orchestrator | Wednesday 10 September 2025 01:04:14 +0000 (0:00:00.994) 0:00:52.948 *** 2025-09-10 01:05:23.724662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-10 01:05:23.724954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-10 01:05:23.724974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-10 01:05:23.724985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-10 01:05:23.725005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-10 01:05:23.725022 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-10 01:05:23.725034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-10 01:05:23.725051 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-10 
01:05:23.725063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-10 01:05:23.725074 | orchestrator | 2025-09-10 01:05:23.725085 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-09-10 01:05:23.725096 | orchestrator | Wednesday 10 September 2025 01:04:18 +0000 (0:00:03.695) 0:00:56.643 *** 2025-09-10 01:05:23.725107 | orchestrator | changed: [testbed-node-0] 2025-09-10 01:05:23.725118 | orchestrator | changed: [testbed-node-1] 2025-09-10 01:05:23.725129 | orchestrator | changed: [testbed-node-2] 2025-09-10 01:05:23.725140 | orchestrator | 2025-09-10 01:05:23.725150 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-09-10 01:05:23.725161 | orchestrator | Wednesday 10 September 2025 01:04:20 +0000 (0:00:02.385) 0:00:59.029 *** 2025-09-10 01:05:23.725172 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-10 01:05:23.725183 | orchestrator | 2025-09-10 01:05:23.725194 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-09-10 01:05:23.725211 | orchestrator | Wednesday 10 September 2025 01:04:21 +0000 (0:00:00.947) 0:00:59.976 *** 2025-09-10 01:05:23.725221 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:05:23.725232 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:05:23.725243 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:05:23.725254 | 
orchestrator | 2025-09-10 01:05:23.725264 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-09-10 01:05:23.725275 | orchestrator | Wednesday 10 September 2025 01:04:22 +0000 (0:00:01.203) 0:01:01.180 *** 2025-09-10 01:05:23.725286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-10 01:05:23.725303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 
'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-10 01:05:23.725321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-10 01:05:23.725332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-10 01:05:23.725351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-10 01:05:23.725362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-10 01:05:23.725379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-10 01:05:23.725390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-10 01:05:23.725401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-10 01:05:23.725412 | orchestrator | 2025-09-10 01:05:23.725423 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-09-10 01:05:23.725434 | orchestrator | Wednesday 10 September 2025 01:04:32 +0000 (0:00:09.414) 0:01:10.594 *** 2025-09-10 01:05:23.725451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 
'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-10 01:05:23.725482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-10 01:05:23.725493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-10 01:05:23.725562 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:05:23.725582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-10 01:05:23.725597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-10 01:05:23.725616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': 
'30'}}})  2025-09-10 01:05:23.725630 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:05:23.725644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-10 01:05:23.725664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-10 01:05:23.725677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-10 01:05:23.725690 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:05:23.725703 | orchestrator | 2025-09-10 01:05:23.725718 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-09-10 01:05:23.725730 | orchestrator | Wednesday 10 September 2025 01:04:33 +0000 (0:00:00.977) 0:01:11.572 *** 2025-09-10 01:05:23.725741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-10 01:05:23.725758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-10 01:05:23.725776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-10 01:05:23.725787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-10 01:05:23.725803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-10 01:05:23.725815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-10 01:05:23.725826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-10 01:05:23.725844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-10 01:05:23.725867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-10 01:05:23.725878 | orchestrator | 2025-09-10 01:05:23.725889 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-10 01:05:23.725900 | orchestrator | Wednesday 10 September 2025 01:04:36 +0000 (0:00:03.239) 0:01:14.812 *** 2025-09-10 01:05:23.725911 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:05:23.725921 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:05:23.725932 | orchestrator | skipping: 
[testbed-node-2]
2025-09-10 01:05:23.725943 | orchestrator |
2025-09-10 01:05:23.725954 | orchestrator | TASK [barbican : Creating barbican database] ***********************************
2025-09-10 01:05:23.725964 | orchestrator | Wednesday 10 September 2025 01:04:37 +0000 (0:00:00.495) 0:01:15.307 ***
2025-09-10 01:05:23.725975 | orchestrator | changed: [testbed-node-0]
2025-09-10 01:05:23.725986 | orchestrator |
2025-09-10 01:05:23.725996 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2025-09-10 01:05:23.726007 | orchestrator | Wednesday 10 September 2025 01:04:39 +0000 (0:00:02.291) 0:01:17.599 ***
2025-09-10 01:05:23.726063 | orchestrator | changed: [testbed-node-0]
2025-09-10 01:05:23.726078 | orchestrator |
2025-09-10 01:05:23.726089 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2025-09-10 01:05:23.726100 | orchestrator | Wednesday 10 September 2025 01:04:41 +0000 (0:00:02.462) 0:01:20.061 ***
2025-09-10 01:05:23.726110 | orchestrator | changed: [testbed-node-0]
2025-09-10 01:05:23.726121 | orchestrator |
2025-09-10 01:05:23.726132 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-09-10 01:05:23.726142 | orchestrator | Wednesday 10 September 2025 01:04:52 +0000 (0:00:10.817) 0:01:30.879 ***
2025-09-10 01:05:23.726153 | orchestrator |
2025-09-10 01:05:23.726164 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-09-10 01:05:23.726174 | orchestrator | Wednesday 10 September 2025 01:04:52 +0000 (0:00:00.258) 0:01:31.137 ***
2025-09-10 01:05:23.726184 | orchestrator |
2025-09-10 01:05:23.726195 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-09-10 01:05:23.726206 | orchestrator | Wednesday 10 September 2025 01:04:53 +0000 (0:00:00.265) 0:01:31.403 ***
2025-09-10 01:05:23.726216 | orchestrator |
2025-09-10 01:05:23.726227 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2025-09-10 01:05:23.726237 | orchestrator | Wednesday 10 September 2025 01:04:53 +0000 (0:00:00.267) 0:01:31.670 ***
2025-09-10 01:05:23.726248 | orchestrator | changed: [testbed-node-0]
2025-09-10 01:05:23.726264 | orchestrator | changed: [testbed-node-1]
2025-09-10 01:05:23.726275 | orchestrator | changed: [testbed-node-2]
2025-09-10 01:05:23.726286 | orchestrator |
2025-09-10 01:05:23.726296 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2025-09-10 01:05:23.726307 | orchestrator | Wednesday 10 September 2025 01:05:07 +0000 (0:00:13.937) 0:01:45.607 ***
2025-09-10 01:05:23.726318 | orchestrator | changed: [testbed-node-0]
2025-09-10 01:05:23.726328 | orchestrator | changed: [testbed-node-1]
2025-09-10 01:05:23.726339 | orchestrator | changed: [testbed-node-2]
2025-09-10 01:05:23.726357 | orchestrator |
2025-09-10 01:05:23.726368 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2025-09-10 01:05:23.726379 | orchestrator | Wednesday 10 September 2025 01:05:14 +0000 (0:00:07.246) 0:01:52.854 ***
2025-09-10 01:05:23.726389 | orchestrator | changed: [testbed-node-0]
2025-09-10 01:05:23.726400 | orchestrator | changed: [testbed-node-1]
2025-09-10 01:05:23.726411 | orchestrator | changed: [testbed-node-2]
2025-09-10 01:05:23.726421 | orchestrator |
2025-09-10 01:05:23.726432 | orchestrator | PLAY RECAP *********************************************************************
2025-09-10 01:05:23.726444 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-10 01:05:23.726457 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-10 01:05:23.726468 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-10 01:05:23.726479 | orchestrator |
2025-09-10 01:05:23.726489 | orchestrator |
2025-09-10 01:05:23.726518 | orchestrator | TASKS RECAP ********************************************************************
2025-09-10 01:05:23.726529 | orchestrator | Wednesday 10 September 2025 01:05:21 +0000 (0:00:07.133) 0:01:59.988 ***
2025-09-10 01:05:23.726540 | orchestrator | ===============================================================================
2025-09-10 01:05:23.726551 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.02s
2025-09-10 01:05:23.726569 | orchestrator | barbican : Restart barbican-api container ------------------------------ 13.94s
2025-09-10 01:05:23.726580 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 10.82s
2025-09-10 01:05:23.726591 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 9.41s
2025-09-10 01:05:23.726601 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 7.25s
2025-09-10 01:05:23.726612 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 7.13s
2025-09-10 01:05:23.726622 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.70s
2025-09-10 01:05:23.726633 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.19s
2025-09-10 01:05:23.726643 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.06s
2025-09-10 01:05:23.726654 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.71s
2025-09-10 01:05:23.726664 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.70s
2025-09-10 01:05:23.726675 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.54s
2025-09-10 01:05:23.726685 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.32s
2025-09-10 01:05:23.726696 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.24s
2025-09-10 01:05:23.726707 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 2.58s
2025-09-10 01:05:23.726717 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.46s
2025-09-10 01:05:23.726728 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.39s
2025-09-10 01:05:23.726739 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.29s
2025-09-10 01:05:23.726749 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.83s
2025-09-10 01:05:23.726760 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.50s
2025-09-10 01:05:23.726770 | orchestrator | 2025-09-10 01:05:23 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED
2025-09-10 01:05:23.726781 | orchestrator | 2025-09-10 01:05:23 | INFO  | Task 201dbecb-eedb-4210-b7c8-2205a587c952 is in state STARTED
2025-09-10 01:05:23.726792 | orchestrator | 2025-09-10 01:05:23 | INFO  | Task 0964edfb-d947-491e-8013-959a26b30137 is in state STARTED
2025-09-10 01:05:23.726811 | orchestrator | 2025-09-10 01:05:23 | INFO  | Wait 1 second(s) until the next check
2025-09-10 01:05:26.750296 | orchestrator | 2025-09-10 01:05:26 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED
2025-09-10 01:05:26.750672 | orchestrator | 2025-09-10 01:05:26 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED
2025-09-10 01:05:26.751649 | orchestrator | 2025-09-10 01:05:26 | INFO  | Task 201dbecb-eedb-4210-b7c8-2205a587c952 is in state STARTED
2025-09-10 01:05:26.752185 | orchestrator | 2025-09-10 01:05:26 | INFO  | Task 0964edfb-d947-491e-8013-959a26b30137 is in state STARTED
2025-09-10 01:05:26.752234 | orchestrator | 2025-09-10 01:05:26 | INFO  | Wait 1 second(s) until the next check
2025-09-10 01:05:29.780634 | orchestrator | 2025-09-10 01:05:29 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED
2025-09-10 01:05:29.782219 | orchestrator | 2025-09-10 01:05:29 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED
2025-09-10 01:05:29.783992 | orchestrator | 2025-09-10 01:05:29 | INFO  | Task 201dbecb-eedb-4210-b7c8-2205a587c952 is in state STARTED
2025-09-10 01:05:29.785850 | orchestrator | 2025-09-10 01:05:29 | INFO  | Task 0964edfb-d947-491e-8013-959a26b30137 is in state STARTED
2025-09-10 01:05:29.786067 | orchestrator | 2025-09-10 01:05:29 | INFO  | Wait 1 second(s) until the next check
2025-09-10 01:05:32.836207 | orchestrator | 2025-09-10 01:05:32 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED
2025-09-10 01:05:32.839731 | orchestrator | 2025-09-10 01:05:32 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED
2025-09-10 01:05:32.842652 | orchestrator | 2025-09-10 01:05:32 | INFO  | Task 201dbecb-eedb-4210-b7c8-2205a587c952 is in state STARTED
2025-09-10 01:05:32.844365 | orchestrator | 2025-09-10 01:05:32 | INFO  | Task 0964edfb-d947-491e-8013-959a26b30137 is in state STARTED
2025-09-10 01:05:32.845050 | orchestrator | 2025-09-10 01:05:32 | INFO  | Wait 1 second(s) until the next check
2025-09-10 01:05:35.886483 | orchestrator | 2025-09-10 01:05:35 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED
2025-09-10 01:05:35.886680 | orchestrator | 2025-09-10 01:05:35 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED
2025-09-10 01:05:35.889365 | orchestrator | 2025-09-10 01:05:35 | INFO  | Task 201dbecb-eedb-4210-b7c8-2205a587c952 is in state STARTED
2025-09-10 01:05:35.893045 | orchestrator | 2025-09-10 01:05:35 | INFO  | Task 0964edfb-d947-491e-8013-959a26b30137 is in state STARTED
2025-09-10 01:05:35.894131 | orchestrator | 2025-09-10 01:05:35 | INFO  | Wait 1 second(s) until the next check
2025-09-10 01:05:38.934415 | orchestrator | 2025-09-10 01:05:38 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED
2025-09-10 01:05:38.936126 | orchestrator | 2025-09-10 01:05:38 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED
2025-09-10 01:05:38.940067 | orchestrator | 2025-09-10 01:05:38 | INFO  | Task 201dbecb-eedb-4210-b7c8-2205a587c952 is in state STARTED
2025-09-10 01:05:38.942364 | orchestrator | 2025-09-10 01:05:38 | INFO  | Task 0964edfb-d947-491e-8013-959a26b30137 is in state STARTED
2025-09-10 01:05:38.942387 | orchestrator | 2025-09-10 01:05:38 | INFO  | Wait 1 second(s) until the next check
2025-09-10 01:05:41.992123 | orchestrator | 2025-09-10 01:05:41 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED
2025-09-10 01:05:41.993114 | orchestrator | 2025-09-10 01:05:41 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED
2025-09-10 01:05:41.994993 | orchestrator | 2025-09-10 01:05:41 | INFO  | Task 201dbecb-eedb-4210-b7c8-2205a587c952 is in state STARTED
2025-09-10 01:05:41.998058 | orchestrator | 2025-09-10 01:05:41 | INFO  | Task 0964edfb-d947-491e-8013-959a26b30137 is in state STARTED
2025-09-10 01:05:41.998088 | orchestrator | 2025-09-10 01:05:41 | INFO  | Wait 1 second(s) until the next check
2025-09-10 01:05:45.045304 | orchestrator | 2025-09-10 01:05:45 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED
2025-09-10 01:05:45.045393 | orchestrator | 2025-09-10 01:05:45 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED
2025-09-10 01:05:45.045405 | orchestrator | 2025-09-10 01:05:45 | INFO  | Task 201dbecb-eedb-4210-b7c8-2205a587c952 is in state STARTED
2025-09-10 01:05:45.045992 | orchestrator | 2025-09-10 01:05:45 | INFO  | Task 0964edfb-d947-491e-8013-959a26b30137 is in state STARTED
2025-09-10 01:05:45.046056 | orchestrator
| 2025-09-10 01:05:45 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:05:48.086851 | orchestrator | 2025-09-10 01:05:48 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:05:48.088605 | orchestrator | 2025-09-10 01:05:48 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED 2025-09-10 01:05:48.090879 | orchestrator | 2025-09-10 01:05:48 | INFO  | Task 201dbecb-eedb-4210-b7c8-2205a587c952 is in state STARTED 2025-09-10 01:05:48.092779 | orchestrator | 2025-09-10 01:05:48 | INFO  | Task 0964edfb-d947-491e-8013-959a26b30137 is in state STARTED 2025-09-10 01:05:48.093052 | orchestrator | 2025-09-10 01:05:48 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:05:51.128798 | orchestrator | 2025-09-10 01:05:51 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:05:51.130325 | orchestrator | 2025-09-10 01:05:51 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED 2025-09-10 01:05:51.133133 | orchestrator | 2025-09-10 01:05:51 | INFO  | Task 201dbecb-eedb-4210-b7c8-2205a587c952 is in state STARTED 2025-09-10 01:05:51.135285 | orchestrator | 2025-09-10 01:05:51 | INFO  | Task 0964edfb-d947-491e-8013-959a26b30137 is in state STARTED 2025-09-10 01:05:51.135310 | orchestrator | 2025-09-10 01:05:51 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:05:54.172202 | orchestrator | 2025-09-10 01:05:54 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:05:54.173254 | orchestrator | 2025-09-10 01:05:54 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED 2025-09-10 01:05:54.174116 | orchestrator | 2025-09-10 01:05:54 | INFO  | Task 201dbecb-eedb-4210-b7c8-2205a587c952 is in state STARTED 2025-09-10 01:05:54.175534 | orchestrator | 2025-09-10 01:05:54 | INFO  | Task 0964edfb-d947-491e-8013-959a26b30137 is in state STARTED 2025-09-10 01:05:54.175559 | orchestrator | 2025-09-10 01:05:54 | INFO  | 
Wait 1 second(s) until the next check 2025-09-10 01:05:57.231183 | orchestrator | 2025-09-10 01:05:57 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:05:57.231718 | orchestrator | 2025-09-10 01:05:57 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED 2025-09-10 01:05:57.232634 | orchestrator | 2025-09-10 01:05:57 | INFO  | Task 201dbecb-eedb-4210-b7c8-2205a587c952 is in state STARTED 2025-09-10 01:05:57.233471 | orchestrator | 2025-09-10 01:05:57 | INFO  | Task 0964edfb-d947-491e-8013-959a26b30137 is in state STARTED 2025-09-10 01:05:57.233524 | orchestrator | 2025-09-10 01:05:57 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:06:00.273799 | orchestrator | 2025-09-10 01:06:00 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:06:00.276030 | orchestrator | 2025-09-10 01:06:00 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED 2025-09-10 01:06:00.276696 | orchestrator | 2025-09-10 01:06:00 | INFO  | Task 201dbecb-eedb-4210-b7c8-2205a587c952 is in state STARTED 2025-09-10 01:06:00.277538 | orchestrator | 2025-09-10 01:06:00 | INFO  | Task 0964edfb-d947-491e-8013-959a26b30137 is in state STARTED 2025-09-10 01:06:00.279296 | orchestrator | 2025-09-10 01:06:00 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:06:03.326933 | orchestrator | 2025-09-10 01:06:03 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:06:03.329249 | orchestrator | 2025-09-10 01:06:03 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED 2025-09-10 01:06:03.332759 | orchestrator | 2025-09-10 01:06:03 | INFO  | Task 201dbecb-eedb-4210-b7c8-2205a587c952 is in state STARTED 2025-09-10 01:06:03.335694 | orchestrator | 2025-09-10 01:06:03 | INFO  | Task 0964edfb-d947-491e-8013-959a26b30137 is in state STARTED 2025-09-10 01:06:03.335916 | orchestrator | 2025-09-10 01:06:03 | INFO  | Wait 1 second(s) until the next 
check 2025-09-10 01:06:06.375137 | orchestrator | 2025-09-10 01:06:06 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:06:06.377441 | orchestrator | 2025-09-10 01:06:06 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED 2025-09-10 01:06:06.378546 | orchestrator | 2025-09-10 01:06:06 | INFO  | Task 201dbecb-eedb-4210-b7c8-2205a587c952 is in state STARTED 2025-09-10 01:06:06.379368 | orchestrator | 2025-09-10 01:06:06 | INFO  | Task 0964edfb-d947-491e-8013-959a26b30137 is in state STARTED 2025-09-10 01:06:06.379401 | orchestrator | 2025-09-10 01:06:06 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:06:09.411595 | orchestrator | 2025-09-10 01:06:09 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:06:09.411802 | orchestrator | 2025-09-10 01:06:09 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED 2025-09-10 01:06:09.412388 | orchestrator | 2025-09-10 01:06:09 | INFO  | Task 201dbecb-eedb-4210-b7c8-2205a587c952 is in state STARTED 2025-09-10 01:06:09.413183 | orchestrator | 2025-09-10 01:06:09 | INFO  | Task 0964edfb-d947-491e-8013-959a26b30137 is in state STARTED 2025-09-10 01:06:09.413205 | orchestrator | 2025-09-10 01:06:09 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:06:12.438771 | orchestrator | 2025-09-10 01:06:12 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:06:12.438942 | orchestrator | 2025-09-10 01:06:12 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED 2025-09-10 01:06:12.439031 | orchestrator | 2025-09-10 01:06:12 | INFO  | Task 201dbecb-eedb-4210-b7c8-2205a587c952 is in state STARTED 2025-09-10 01:06:12.439649 | orchestrator | 2025-09-10 01:06:12 | INFO  | Task 0964edfb-d947-491e-8013-959a26b30137 is in state STARTED 2025-09-10 01:06:12.439671 | orchestrator | 2025-09-10 01:06:12 | INFO  | Wait 1 second(s) until the next check 2025-09-10 
01:06:15.525827 | orchestrator | 2025-09-10 01:06:15 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:06:15.526001 | orchestrator | 2025-09-10 01:06:15 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED 2025-09-10 01:06:15.527231 | orchestrator | 2025-09-10 01:06:15 | INFO  | Task 201dbecb-eedb-4210-b7c8-2205a587c952 is in state STARTED 2025-09-10 01:06:15.529063 | orchestrator | 2025-09-10 01:06:15 | INFO  | Task 0964edfb-d947-491e-8013-959a26b30137 is in state STARTED 2025-09-10 01:06:15.529107 | orchestrator | 2025-09-10 01:06:15 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:06:18.563740 | orchestrator | 2025-09-10 01:06:18 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:06:18.563845 | orchestrator | 2025-09-10 01:06:18 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED 2025-09-10 01:06:18.564279 | orchestrator | 2025-09-10 01:06:18 | INFO  | Task 201dbecb-eedb-4210-b7c8-2205a587c952 is in state STARTED 2025-09-10 01:06:18.564930 | orchestrator | 2025-09-10 01:06:18 | INFO  | Task 0964edfb-d947-491e-8013-959a26b30137 is in state STARTED 2025-09-10 01:06:18.564951 | orchestrator | 2025-09-10 01:06:18 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:06:21.617424 | orchestrator | 2025-09-10 01:06:21 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:06:21.618631 | orchestrator | 2025-09-10 01:06:21 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED 2025-09-10 01:06:21.619377 | orchestrator | 2025-09-10 01:06:21 | INFO  | Task 201dbecb-eedb-4210-b7c8-2205a587c952 is in state STARTED 2025-09-10 01:06:21.620118 | orchestrator | 2025-09-10 01:06:21 | INFO  | Task 0964edfb-d947-491e-8013-959a26b30137 is in state STARTED 2025-09-10 01:06:21.620141 | orchestrator | 2025-09-10 01:06:21 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:06:24.661614 | orchestrator 
| 2025-09-10 01:06:24 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:06:24.664877 | orchestrator | 2025-09-10 01:06:24 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED 2025-09-10 01:06:24.666148 | orchestrator | 2025-09-10 01:06:24 | INFO  | Task 201dbecb-eedb-4210-b7c8-2205a587c952 is in state STARTED 2025-09-10 01:06:24.668866 | orchestrator | 2025-09-10 01:06:24 | INFO  | Task 0964edfb-d947-491e-8013-959a26b30137 is in state STARTED 2025-09-10 01:06:24.668920 | orchestrator | 2025-09-10 01:06:24 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:06:27.750003 | orchestrator | 2025-09-10 01:06:27 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:06:27.750671 | orchestrator | 2025-09-10 01:06:27 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED 2025-09-10 01:06:27.751650 | orchestrator | 2025-09-10 01:06:27 | INFO  | Task 201dbecb-eedb-4210-b7c8-2205a587c952 is in state STARTED 2025-09-10 01:06:27.752248 | orchestrator | 2025-09-10 01:06:27 | INFO  | Task 0964edfb-d947-491e-8013-959a26b30137 is in state STARTED 2025-09-10 01:06:27.752271 | orchestrator | 2025-09-10 01:06:27 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:06:30.786003 | orchestrator | 2025-09-10 01:06:30 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:06:30.786230 | orchestrator | 2025-09-10 01:06:30 | INFO  | Task 47736d54-0501-4764-bdb4-8f9929837f38 is in state STARTED 2025-09-10 01:06:30.786764 | orchestrator | 2025-09-10 01:06:30 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED 2025-09-10 01:06:30.787184 | orchestrator | 2025-09-10 01:06:30 | INFO  | Task 201dbecb-eedb-4210-b7c8-2205a587c952 is in state SUCCESS 2025-09-10 01:06:30.787781 | orchestrator | 2025-09-10 01:06:30 | INFO  | Task 0964edfb-d947-491e-8013-959a26b30137 is in state STARTED 2025-09-10 01:06:30.787804 | orchestrator | 
2025-09-10 01:06:30 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:06:33.816757 | orchestrator | 2025-09-10 01:06:33 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:06:33.817760 | orchestrator | 2025-09-10 01:06:33 | INFO  | Task 47736d54-0501-4764-bdb4-8f9929837f38 is in state STARTED 2025-09-10 01:06:33.818223 | orchestrator | 2025-09-10 01:06:33 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED 2025-09-10 01:06:33.818744 | orchestrator | 2025-09-10 01:06:33 | INFO  | Task 0964edfb-d947-491e-8013-959a26b30137 is in state STARTED 2025-09-10 01:06:33.818767 | orchestrator | 2025-09-10 01:06:33 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:06:36.857664 | orchestrator | 2025-09-10 01:06:36 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:06:36.859028 | orchestrator | 2025-09-10 01:06:36 | INFO  | Task 47736d54-0501-4764-bdb4-8f9929837f38 is in state STARTED 2025-09-10 01:06:36.859071 | orchestrator | 2025-09-10 01:06:36 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED 2025-09-10 01:06:36.860752 | orchestrator | 2025-09-10 01:06:36 | INFO  | Task 0964edfb-d947-491e-8013-959a26b30137 is in state STARTED 2025-09-10 01:06:36.860785 | orchestrator | 2025-09-10 01:06:36 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:06:39.888362 | orchestrator | 2025-09-10 01:06:39 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:06:39.889027 | orchestrator | 2025-09-10 01:06:39 | INFO  | Task 47736d54-0501-4764-bdb4-8f9929837f38 is in state STARTED 2025-09-10 01:06:39.889313 | orchestrator | 2025-09-10 01:06:39 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED 2025-09-10 01:06:39.889857 | orchestrator | 2025-09-10 01:06:39 | INFO  | Task 0964edfb-d947-491e-8013-959a26b30137 is in state STARTED 2025-09-10 01:06:39.889974 | orchestrator | 2025-09-10 01:06:39 | INFO  | 
Wait 1 second(s) until the next check 2025-09-10 01:06:42.918604 | orchestrator | 2025-09-10 01:06:42 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:06:42.918864 | orchestrator | 2025-09-10 01:06:42 | INFO  | Task 47736d54-0501-4764-bdb4-8f9929837f38 is in state STARTED 2025-09-10 01:06:42.920060 | orchestrator | 2025-09-10 01:06:42 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED 2025-09-10 01:06:42.920088 | orchestrator | 2025-09-10 01:06:42 | INFO  | Task 0964edfb-d947-491e-8013-959a26b30137 is in state STARTED 2025-09-10 01:06:42.920100 | orchestrator | 2025-09-10 01:06:42 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:06:45.953645 | orchestrator | 2025-09-10 01:06:45 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:06:45.955981 | orchestrator | 2025-09-10 01:06:45 | INFO  | Task 47736d54-0501-4764-bdb4-8f9929837f38 is in state STARTED 2025-09-10 01:06:45.957715 | orchestrator | 2025-09-10 01:06:45 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED 2025-09-10 01:06:45.960171 | orchestrator | 2025-09-10 01:06:45 | INFO  | Task 0964edfb-d947-491e-8013-959a26b30137 is in state STARTED 2025-09-10 01:06:45.960204 | orchestrator | 2025-09-10 01:06:45 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:06:49.020137 | orchestrator | 2025-09-10 01:06:49 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:06:49.022778 | orchestrator | 2025-09-10 01:06:49 | INFO  | Task 47736d54-0501-4764-bdb4-8f9929837f38 is in state STARTED 2025-09-10 01:06:49.025184 | orchestrator | 2025-09-10 01:06:49 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED 2025-09-10 01:06:49.025216 | orchestrator | 2025-09-10 01:06:49 | INFO  | Task 0964edfb-d947-491e-8013-959a26b30137 is in state STARTED 2025-09-10 01:06:49.025230 | orchestrator | 2025-09-10 01:06:49 | INFO  | Wait 1 second(s) until the next 
check 2025-09-10 01:06:52.091309 | orchestrator | 2025-09-10 01:06:52 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:06:52.092661 | orchestrator | 2025-09-10 01:06:52 | INFO  | Task 47736d54-0501-4764-bdb4-8f9929837f38 is in state STARTED 2025-09-10 01:06:52.093943 | orchestrator | 2025-09-10 01:06:52 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED 2025-09-10 01:06:52.095391 | orchestrator | 2025-09-10 01:06:52 | INFO  | Task 0964edfb-d947-491e-8013-959a26b30137 is in state STARTED 2025-09-10 01:06:52.095663 | orchestrator | 2025-09-10 01:06:52 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:06:55.153564 | orchestrator | 2025-09-10 01:06:55 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:06:55.154594 | orchestrator | 2025-09-10 01:06:55 | INFO  | Task 47736d54-0501-4764-bdb4-8f9929837f38 is in state STARTED 2025-09-10 01:06:55.156070 | orchestrator | 2025-09-10 01:06:55 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED 2025-09-10 01:06:55.157896 | orchestrator | 2025-09-10 01:06:55 | INFO  | Task 0964edfb-d947-491e-8013-959a26b30137 is in state STARTED 2025-09-10 01:06:55.157926 | orchestrator | 2025-09-10 01:06:55 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:06:58.204417 | orchestrator | 2025-09-10 01:06:58 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:06:58.204782 | orchestrator | 2025-09-10 01:06:58 | INFO  | Task 47736d54-0501-4764-bdb4-8f9929837f38 is in state STARTED 2025-09-10 01:06:58.206441 | orchestrator | 2025-09-10 01:06:58 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED 2025-09-10 01:06:58.207867 | orchestrator | 2025-09-10 01:06:58 | INFO  | Task 0964edfb-d947-491e-8013-959a26b30137 is in state STARTED 2025-09-10 01:06:58.207889 | orchestrator | 2025-09-10 01:06:58 | INFO  | Wait 1 second(s) until the next check 2025-09-10 
01:07:01.264831 | orchestrator | 2025-09-10 01:07:01 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED
2025-09-10 01:07:01.264935 | orchestrator | 2025-09-10 01:07:01 | INFO  | Task 47736d54-0501-4764-bdb4-8f9929837f38 is in state STARTED
2025-09-10 01:07:01.264951 | orchestrator | 2025-09-10 01:07:01 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED
2025-09-10 01:07:01.264962 | orchestrator | 2025-09-10 01:07:01 | INFO  | Task 0964edfb-d947-491e-8013-959a26b30137 is in state STARTED
2025-09-10 01:07:01.264974 | orchestrator | 2025-09-10 01:07:01 | INFO  | Wait 1 second(s) until the next check
2025-09-10 01:07:04.298719 | orchestrator | 2025-09-10 01:07:04 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED
2025-09-10 01:07:04.298889 | orchestrator | 2025-09-10 01:07:04 | INFO  | Task 47736d54-0501-4764-bdb4-8f9929837f38 is in state STARTED
2025-09-10 01:07:04.299259 | orchestrator | 2025-09-10 01:07:04 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state STARTED
2025-09-10 01:07:04.300356 | orchestrator | 2025-09-10 01:07:04 | INFO  | Task 0964edfb-d947-491e-8013-959a26b30137 is in state STARTED
2025-09-10 01:07:04.300460 | orchestrator | 2025-09-10 01:07:04 | INFO  | Wait 1 second(s) until the next check
2025-09-10 01:07:07.343835 | orchestrator | 2025-09-10 01:07:07 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED
2025-09-10 01:07:07.345366 | orchestrator | 2025-09-10 01:07:07 | INFO  | Task aa7ce733-06c2-47fe-9e95-c4eada13f55c is in state STARTED
2025-09-10 01:07:07.349240 | orchestrator | 2025-09-10 01:07:07 | INFO  | Task 47736d54-0501-4764-bdb4-8f9929837f38 is in state STARTED
2025-09-10 01:07:07.357256 | orchestrator | 2025-09-10 01:07:07 | INFO  | Task 3e89403b-714d-42e6-9845-37eb925a6141 is in state SUCCESS
2025-09-10 01:07:07.360060 | orchestrator |
2025-09-10 01:07:07.360094 | orchestrator |
2025-09-10 01:07:07.360106 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2025-09-10 01:07:07.360117 | orchestrator |
2025-09-10 01:07:07.360128 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2025-09-10 01:07:07.360139 | orchestrator | Wednesday 10 September 2025 01:05:27 +0000 (0:00:00.254) 0:00:00.255 ***
2025-09-10 01:07:07.360150 | orchestrator | changed: [localhost]
2025-09-10 01:07:07.360162 | orchestrator |
2025-09-10 01:07:07.360173 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2025-09-10 01:07:07.360184 | orchestrator | Wednesday 10 September 2025 01:05:29 +0000 (0:00:01.117) 0:00:01.372 ***
2025-09-10 01:07:07.360282 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left).
2025-09-10 01:07:07.360317 | orchestrator | changed: [localhost]
2025-09-10 01:07:07.360328 | orchestrator |
2025-09-10 01:07:07.360339 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2025-09-10 01:07:07.360350 | orchestrator | Wednesday 10 September 2025 01:06:21 +0000 (0:00:52.664) 0:00:54.037 ***
2025-09-10 01:07:07.360361 | orchestrator | changed: [localhost]
2025-09-10 01:07:07.360398 | orchestrator |
2025-09-10 01:07:07.360411 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-10 01:07:07.360422 | orchestrator |
2025-09-10 01:07:07.360433 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-10 01:07:07.360444 | orchestrator | Wednesday 10 September 2025 01:06:26 +0000 (0:00:04.495) 0:00:58.532 ***
2025-09-10 01:07:07.360454 | orchestrator | ok: [testbed-node-0]
2025-09-10 01:07:07.360465 | orchestrator | ok: [testbed-node-1]
2025-09-10 01:07:07.360618 | orchestrator | ok: [testbed-node-2]
2025-09-10 01:07:07.360642 | orchestrator |
2025-09-10 01:07:07.360665 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-10 01:07:07.360679 | orchestrator | Wednesday 10 September 2025 01:06:26 +0000 (0:00:00.349) 0:00:58.882 ***
2025-09-10 01:07:07.360693 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2025-09-10 01:07:07.360706 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2025-09-10 01:07:07.360719 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2025-09-10 01:07:07.360747 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2025-09-10 01:07:07.360760 | orchestrator |
2025-09-10 01:07:07.360773 | orchestrator | PLAY [Apply role ironic] *******************************************************
2025-09-10 01:07:07.360785 | orchestrator | skipping: no hosts matched
2025-09-10 01:07:07.360799 | orchestrator |
2025-09-10 01:07:07.360812 | orchestrator | PLAY RECAP *********************************************************************
2025-09-10 01:07:07.360825 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-10 01:07:07.360840 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-10 01:07:07.360855 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-10 01:07:07.360868 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-10 01:07:07.360880 | orchestrator |
2025-09-10 01:07:07.360893 | orchestrator |
2025-09-10 01:07:07.360907 | orchestrator | TASKS RECAP ********************************************************************
2025-09-10 01:07:07.360920 | orchestrator | Wednesday 10 September 2025 01:06:27 +0000 (0:00:00.738) 0:00:59.620 ***
2025-09-10 01:07:07.360933 | orchestrator | ===============================================================================
2025-09-10 01:07:07.360946 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 52.66s
2025-09-10 01:07:07.360973 | orchestrator | Download ironic-agent kernel -------------------------------------------- 4.50s
2025-09-10 01:07:07.360986 | orchestrator | Ensure the destination directory exists --------------------------------- 1.12s
2025-09-10 01:07:07.360999 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.74s
2025-09-10 01:07:07.361013 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s
2025-09-10 01:07:07.361025 | orchestrator |
2025-09-10 01:07:07.361036 | orchestrator |
2025-09-10 01:07:07.361047 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-10 01:07:07.361058 | orchestrator |
2025-09-10 01:07:07.361068 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-10 01:07:07.361079 | orchestrator | Wednesday 10 September 2025 01:02:54 +0000 (0:00:00.276) 0:00:00.276 ***
2025-09-10 01:07:07.361090 | orchestrator | ok: [testbed-node-0]
2025-09-10 01:07:07.361100 | orchestrator | ok: [testbed-node-1]
2025-09-10 01:07:07.361111 | orchestrator | ok: [testbed-node-2]
2025-09-10 01:07:07.361122 | orchestrator | ok: [testbed-node-3]
2025-09-10 01:07:07.361133 | orchestrator | ok: [testbed-node-4]
2025-09-10 01:07:07.361143 | orchestrator | ok: [testbed-node-5]
2025-09-10 01:07:07.361154 | orchestrator |
2025-09-10 01:07:07.361165 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-10 01:07:07.361176 | orchestrator | Wednesday 10 September 2025 01:02:55 +0000 (0:00:00.902) 0:00:01.179 ***
2025-09-10 01:07:07.361186 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2025-09-10 01:07:07.361197 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2025-09-10 01:07:07.361208 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2025-09-10 01:07:07.361219 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2025-09-10 01:07:07.361229 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2025-09-10 01:07:07.361240 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2025-09-10 01:07:07.361251 | orchestrator |
2025-09-10 01:07:07.361261 | orchestrator | PLAY [Apply role neutron] ******************************************************
2025-09-10 01:07:07.361272 | orchestrator |
2025-09-10 01:07:07.361283 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-09-10 01:07:07.361304 | orchestrator | Wednesday 10 September 2025 01:02:56 +0000 (0:00:01.227) 0:00:02.407 ***
2025-09-10 01:07:07.361316 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-10 01:07:07.361327 | orchestrator |
2025-09-10 01:07:07.361337 | orchestrator | TASK [neutron : Get container facts] *******************************************
2025-09-10 01:07:07.361348 | orchestrator | Wednesday 10 September 2025 01:02:58 +0000 (0:00:01.454) 0:00:03.861 ***
2025-09-10 01:07:07.361359 | orchestrator | ok: [testbed-node-1]
2025-09-10 01:07:07.361370 | orchestrator | ok: [testbed-node-0]
2025-09-10 01:07:07.361380 | orchestrator | ok: [testbed-node-3]
2025-09-10 01:07:07.361391 | orchestrator | ok: [testbed-node-4]
2025-09-10 01:07:07.361401 | orchestrator | ok: [testbed-node-2]
2025-09-10 01:07:07.361412 | orchestrator | ok: [testbed-node-5]
2025-09-10 01:07:07.361422 | orchestrator |
2025-09-10 01:07:07.361433 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2025-09-10 01:07:07.361444 | orchestrator | Wednesday 10 September 2025 01:02:59 +0000 (0:00:01.297) 0:00:05.158 ***
2025-09-10 01:07:07.361454 | orchestrator | ok: [testbed-node-0]
2025-09-10 01:07:07.361465 | orchestrator | ok: [testbed-node-2]
2025-09-10 01:07:07.361475 | orchestrator | ok: [testbed-node-1]
2025-09-10 01:07:07.361504 | orchestrator | ok: [testbed-node-3]
2025-09-10 01:07:07.361516 | orchestrator | ok: [testbed-node-4]
2025-09-10 01:07:07.361526 | orchestrator | ok: [testbed-node-5]
2025-09-10 01:07:07.361537 | orchestrator |
2025-09-10 01:07:07.361548 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2025-09-10 01:07:07.361558 | orchestrator | Wednesday 10 September 2025 01:03:00 +0000 (0:00:01.124) 0:00:06.282 ***
2025-09-10 01:07:07.361577 | orchestrator | ok: [testbed-node-0] => {
2025-09-10 01:07:07.361588 | orchestrator |  "changed": false,
2025-09-10 01:07:07.361599 | orchestrator |  "msg": "All assertions passed"
2025-09-10 01:07:07.361610 | orchestrator | }
2025-09-10 01:07:07.361621 | orchestrator | ok: [testbed-node-1] => {
2025-09-10 01:07:07.361632 | orchestrator |  "changed": false,
2025-09-10 01:07:07.361642 | orchestrator |  "msg": "All assertions passed"
2025-09-10 01:07:07.361653 | orchestrator | }
2025-09-10 01:07:07.361663 | orchestrator | ok: [testbed-node-2] => {
2025-09-10 01:07:07.361674 | orchestrator |  "changed": false,
2025-09-10 01:07:07.361685 | orchestrator |  "msg": "All assertions passed"
2025-09-10 01:07:07.361701 | orchestrator | }
2025-09-10 01:07:07.361712 | orchestrator | ok: [testbed-node-3] => {
2025-09-10 01:07:07.361722 | orchestrator |  "changed": false,
2025-09-10 01:07:07.361733 | orchestrator |  "msg": "All assertions passed"
2025-09-10 01:07:07.361744 | orchestrator | }
2025-09-10 01:07:07.361754 | orchestrator | ok: [testbed-node-4] => {
2025-09-10 01:07:07.361765 | orchestrator |  "changed": false,
2025-09-10 01:07:07.361775 | orchestrator |  "msg": "All assertions passed"
2025-09-10 01:07:07.361786 | orchestrator | }
2025-09-10 01:07:07.361797 | orchestrator | ok: [testbed-node-5] => {
2025-09-10 01:07:07.361807 | orchestrator |  "changed": false,
2025-09-10 01:07:07.361818 | orchestrator |  "msg": "All assertions passed"
2025-09-10 01:07:07.361828 | orchestrator | }
2025-09-10 01:07:07.361839 | orchestrator |
2025-09-10 01:07:07.361850 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2025-09-10 01:07:07.361860 | orchestrator | Wednesday 10 September 2025 01:03:01 +0000 (0:00:00.877) 0:00:07.160 ***
2025-09-10 01:07:07.361871 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:07:07.361882 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:07:07.361892 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:07:07.361903 | orchestrator | skipping: [testbed-node-3]
2025-09-10 01:07:07.361913 | orchestrator | skipping: [testbed-node-4]
2025-09-10 01:07:07.361924 | orchestrator | skipping: [testbed-node-5]
2025-09-10 01:07:07.361935 | orchestrator |
2025-09-10 01:07:07.361945 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2025-09-10 01:07:07.361956 | orchestrator | Wednesday 10 September 2025 01:03:01 +0000 (0:00:00.596) 0:00:07.756 ***
2025-09-10 01:07:07.361967 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2025-09-10 01:07:07.361977 | orchestrator |
2025-09-10 01:07:07.361988 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2025-09-10 01:07:07.361999 | orchestrator | Wednesday 10 September 2025 01:03:05 +0000 (0:00:03.880) 0:00:11.637 ***
2025-09-10 01:07:07.362010 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2025-09-10 01:07:07.362073 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2025-09-10 01:07:07.362085 | orchestrator |
2025-09-10 01:07:07.362095 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2025-09-10 01:07:07.362106 | orchestrator | Wednesday 10 September 2025 01:03:12 +0000 (0:00:06.409) 0:00:18.047 ***
2025-09-10 01:07:07.362117 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-10 01:07:07.362127 | orchestrator |
2025-09-10 01:07:07.362138 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2025-09-10 01:07:07.362148 | orchestrator | Wednesday 10 September 2025 01:03:15 +0000 (0:00:03.343) 0:00:21.390 ***
2025-09-10 01:07:07.362159 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-10 01:07:07.362170 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2025-09-10 01:07:07.362180 | orchestrator |
2025-09-10 01:07:07.362191 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2025-09-10 01:07:07.362201 | orchestrator | Wednesday 10 September 2025 01:03:19 +0000 (0:00:03.955) 0:00:25.346 ***
2025-09-10 01:07:07.362212 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-10 01:07:07.362230 | orchestrator |
2025-09-10 01:07:07.362241 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2025-09-10 01:07:07.362251 | orchestrator | Wednesday 10 September 2025 01:03:22 +0000 (0:00:03.428) 0:00:28.774 ***
2025-09-10 01:07:07.362262 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2025-09-10 01:07:07.362273 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2025-09-10 01:07:07.362283 | orchestrator |
2025-09-10 01:07:07.362294 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-09-10 01:07:07.362305 | orchestrator | Wednesday 10 September 2025 01:03:30 +0000 (0:00:07.914) 0:00:36.689 ***
2025-09-10 01:07:07.362324 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:07:07.362335 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:07:07.362346 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:07:07.362357 | orchestrator | skipping: [testbed-node-3]
2025-09-10 01:07:07.362367 | orchestrator | skipping: [testbed-node-4]
2025-09-10 01:07:07.362378 | orchestrator | skipping: [testbed-node-5]
2025-09-10 01:07:07.362389 | orchestrator |
2025-09-10 01:07:07.362399 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2025-09-10 01:07:07.362410 | orchestrator | Wednesday 10 September 2025 01:03:31 +0000 (0:00:00.744) 0:00:37.434 ***
2025-09-10 01:07:07.362421 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:07:07.362431 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:07:07.362442 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:07:07.362452 | orchestrator | skipping: [testbed-node-4]
2025-09-10 01:07:07.362463 | orchestrator | skipping: [testbed-node-3]
2025-09-10 01:07:07.362473 | orchestrator | skipping: [testbed-node-5]
2025-09-10 01:07:07.362510 | orchestrator |
2025-09-10 01:07:07.362522 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2025-09-10 01:07:07.362533 | orchestrator | Wednesday 10 September 2025 01:03:33 +0000 (0:00:02.105) 0:00:39.539 ***
2025-09-10 01:07:07.362544 | orchestrator | ok: [testbed-node-0]
2025-09-10 01:07:07.362554 | orchestrator | ok: [testbed-node-1]
2025-09-10 01:07:07.362565 | orchestrator | ok: [testbed-node-3]
2025-09-10 01:07:07.362576 | orchestrator | ok: [testbed-node-2]
2025-09-10 01:07:07.362586 | orchestrator | ok: [testbed-node-4]
2025-09-10 01:07:07.362597 | orchestrator | ok: [testbed-node-5]
2025-09-10 01:07:07.362608 | orchestrator |
2025-09-10 01:07:07.362618 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-09-10 01:07:07.362629 | orchestrator | Wednesday 10 September 2025 01:03:34 +0000 (0:00:01.135) 0:00:40.674 ***
2025-09-10 01:07:07.362640 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:07:07.362651 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:07:07.362661 | orchestrator | skipping: [testbed-node-3]
2025-09-10 01:07:07.362672 | orchestrator | skipping: [testbed-node-5]
2025-09-10 01:07:07.362682 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:07:07.362693 | orchestrator | skipping: [testbed-node-4]
2025-09-10 01:07:07.362703 | orchestrator |
2025-09-10 01:07:07.362714 | orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
2025-09-10 01:07:07.362731 | orchestrator | Wednesday 10 September 2025 01:03:36 +0000 (0:00:02.032) 0:00:42.707 ***
2025-09-10 01:07:07.362745 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-10 01:07:07.362769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5',
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-10 01:07:07.362826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-10 01:07:07.362849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-10 01:07:07.362867 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-10 01:07:07.362879 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-10 01:07:07.362899 | orchestrator | 2025-09-10 01:07:07.362910 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-09-10 01:07:07.362921 | orchestrator | 
Wednesday 10 September 2025 01:03:41 +0000 (0:00:04.227) 0:00:46.935 *** 2025-09-10 01:07:07.362932 | orchestrator | [WARNING]: Skipped 2025-09-10 01:07:07.362943 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-09-10 01:07:07.362954 | orchestrator | due to this access issue: 2025-09-10 01:07:07.362965 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-09-10 01:07:07.362975 | orchestrator | a directory 2025-09-10 01:07:07.362986 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-10 01:07:07.362997 | orchestrator | 2025-09-10 01:07:07.363007 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-10 01:07:07.363018 | orchestrator | Wednesday 10 September 2025 01:03:42 +0000 (0:00:01.355) 0:00:48.290 *** 2025-09-10 01:07:07.363029 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-10 01:07:07.363041 | orchestrator | 2025-09-10 01:07:07.363052 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-09-10 01:07:07.363062 | orchestrator | Wednesday 10 September 2025 01:03:43 +0000 (0:00:01.304) 0:00:49.594 *** 2025-09-10 01:07:07.363080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 
'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-10 01:07:07.363093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-10 01:07:07.363109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-10 01:07:07.363128 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-10 01:07:07.363139 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-10 01:07:07.363158 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-10 01:07:07.363169 | orchestrator | 2025-09-10 01:07:07.363180 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-09-10 01:07:07.363191 | orchestrator | Wednesday 10 September 2025 01:03:46 +0000 (0:00:03.210) 0:00:52.806 *** 2025-09-10 01:07:07.363203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-10 01:07:07.363214 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:07:07.363230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-10 01:07:07.363248 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:07:07.363260 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-10 01:07:07.363272 | orchestrator | skipping: [testbed-node-3] 2025-09-10 01:07:07.363283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-10 01:07:07.363294 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:07:07.363314 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-10 01:07:07.363326 | orchestrator | skipping: [testbed-node-4] 2025-09-10 01:07:07.363337 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-10 01:07:07.363361 | orchestrator | skipping: [testbed-node-5] 2025-09-10 01:07:07.363372 | orchestrator | 2025-09-10 01:07:07.363382 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-09-10 01:07:07.363393 | orchestrator | Wednesday 10 September 2025 01:03:49 +0000 (0:00:02.648) 0:00:55.454 *** 2025-09-10 01:07:07.363421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-10 01:07:07.363432 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:07:07.363444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-10 01:07:07.363455 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:07:07.363471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-10 01:07:07.363483 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:07:07.363547 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-10 01:07:07.363567 | orchestrator | skipping: [testbed-node-3] 2025-09-10 01:07:07.363579 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-10 01:07:07.363589 | orchestrator | skipping: [testbed-node-4] 2025-09-10 01:07:07.363717 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-10 01:07:07.363741 | orchestrator | skipping: [testbed-node-5] 2025-09-10 01:07:07.363751 | orchestrator | 2025-09-10 01:07:07.363761 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-09-10 01:07:07.363771 | orchestrator | Wednesday 10 September 2025 01:03:52 +0000 (0:00:02.926) 0:00:58.381 *** 2025-09-10 01:07:07.363781 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:07:07.363790 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:07:07.363800 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:07:07.363809 | orchestrator | skipping: [testbed-node-3] 2025-09-10 01:07:07.363819 | orchestrator | skipping: [testbed-node-4] 2025-09-10 01:07:07.363828 | orchestrator | skipping: [testbed-node-5] 2025-09-10 01:07:07.363838 | orchestrator | 2025-09-10 01:07:07.363847 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-09-10 01:07:07.363857 | orchestrator | Wednesday 10 September 2025 01:03:54 +0000 (0:00:02.134) 0:01:00.516 *** 2025-09-10 01:07:07.363866 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:07:07.363876 | orchestrator | 2025-09-10 01:07:07.363885 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-09-10 01:07:07.363895 | orchestrator | Wednesday 10 September 2025 01:03:54 +0000 (0:00:00.136) 0:01:00.652 *** 2025-09-10 01:07:07.363904 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:07:07.363914 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:07:07.363923 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:07:07.363933 | orchestrator | skipping: [testbed-node-3] 2025-09-10 01:07:07.363942 | orchestrator | skipping: [testbed-node-4] 2025-09-10 01:07:07.363951 | orchestrator | 
skipping: [testbed-node-5] 2025-09-10 01:07:07.363961 | orchestrator | 2025-09-10 01:07:07.363970 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-09-10 01:07:07.363980 | orchestrator | Wednesday 10 September 2025 01:03:55 +0000 (0:00:00.753) 0:01:01.406 *** 2025-09-10 01:07:07.364000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-10 01:07:07.364018 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:07:07.364028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-10 01:07:07.364039 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:07:07.364053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-10 01:07:07.364063 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:07:07.364073 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-10 01:07:07.364083 | orchestrator | skipping: [testbed-node-3] 2025-09-10 01:07:07.364093 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-10 01:07:07.364109 | orchestrator | skipping: [testbed-node-4] 2025-09-10 01:07:07.364127 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-10 01:07:07.364138 | orchestrator | skipping: [testbed-node-5] 2025-09-10 01:07:07.364147 | orchestrator | 2025-09-10 01:07:07.364157 | orchestrator | TASK [neutron : Copying over config.json files for 
services] ******************* 2025-09-10 01:07:07.364166 | orchestrator | Wednesday 10 September 2025 01:03:58 +0000 (0:00:02.779) 0:01:04.186 *** 2025-09-10 01:07:07.364181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-10 01:07:07.364192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 
2025-09-10 01:07:07.364203 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-10 01:07:07.364218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-10 01:07:07.364235 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-10 01:07:07.364249 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-10 01:07:07.364259 | orchestrator | 2025-09-10 01:07:07.364269 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-09-10 01:07:07.364279 | orchestrator | Wednesday 10 September 2025 01:04:02 +0000 (0:00:03.777) 0:01:07.963 *** 2025-09-10 01:07:07.364289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-10 01:07:07.364299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-10 01:07:07.364322 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-10 01:07:07.364332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-10 01:07:07.364347 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-10 01:07:07.364357 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-10 01:07:07.364366 | orchestrator | 2025-09-10 01:07:07.364376 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-09-10 01:07:07.364386 | orchestrator | Wednesday 10 September 2025 01:04:08 +0000 (0:00:06.287) 0:01:14.251 *** 2025-09-10 01:07:07.364396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-10 01:07:07.364411 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:07:07.364427 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-10 01:07:07.364437 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:07:07.364447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-10 01:07:07.364457 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:07:07.364472 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-10 01:07:07.364482 | orchestrator | skipping: [testbed-node-3] 2025-09-10 01:07:07.364508 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-10 01:07:07.364524 | orchestrator | skipping: [testbed-node-4] 2025-09-10 01:07:07.364534 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-10 01:07:07.364544 | orchestrator | skipping: [testbed-node-5] 2025-09-10 01:07:07.364553 | orchestrator | 2025-09-10 01:07:07.364563 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-09-10 01:07:07.364573 | orchestrator | Wednesday 10 September 2025 01:04:10 +0000 (0:00:02.370) 0:01:16.622 *** 2025-09-10 01:07:07.364582 | orchestrator | skipping: [testbed-node-4] 2025-09-10 01:07:07.364592 | orchestrator | skipping: [testbed-node-3] 2025-09-10 01:07:07.364602 | orchestrator | changed: [testbed-node-2] 2025-09-10 01:07:07.364612 | orchestrator | changed: [testbed-node-1] 2025-09-10 01:07:07.364621 | orchestrator | changed: [testbed-node-0] 2025-09-10 01:07:07.364631 | orchestrator | skipping: [testbed-node-5] 2025-09-10 01:07:07.364641 | orchestrator | 2025-09-10 01:07:07.364656 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-09-10 01:07:07.364665 | orchestrator | Wednesday 10 September 2025 01:04:14 +0000 (0:00:03.843) 0:01:20.466 *** 2025-09-10 01:07:07.364675 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-10 01:07:07.364685 | orchestrator | skipping: [testbed-node-3] 2025-09-10 01:07:07.364700 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-10 01:07:07.364710 | orchestrator | skipping: [testbed-node-4] 2025-09-10 01:07:07.364720 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-10 01:07:07.364735 | orchestrator | skipping: [testbed-node-5] 2025-09-10 01:07:07.364745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-10 01:07:07.364761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-10 
01:07:07.364772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-10 01:07:07.364782 | orchestrator | 2025-09-10 01:07:07.364792 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-09-10 01:07:07.364802 | orchestrator | Wednesday 10 September 2025 01:04:18 +0000 (0:00:04.309) 0:01:24.775 *** 2025-09-10 01:07:07.364811 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:07:07.364821 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:07:07.364830 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:07:07.364840 | orchestrator | skipping: [testbed-node-3] 2025-09-10 01:07:07.364849 | orchestrator | skipping: [testbed-node-5] 2025-09-10 01:07:07.364859 | orchestrator | skipping: [testbed-node-4] 2025-09-10 01:07:07.364868 | orchestrator | 2025-09-10 01:07:07.364886 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-09-10 01:07:07.364895 | orchestrator | Wednesday 10 September 2025 01:04:21 +0000 (0:00:02.611) 0:01:27.386 *** 2025-09-10 01:07:07.364905 | orchestrator | skipping: [testbed-node-0] 2025-09-10 
01:07:07.364920 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:07:07.364930 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:07:07.364939 | orchestrator | skipping: [testbed-node-3] 2025-09-10 01:07:07.364949 | orchestrator | skipping: [testbed-node-4] 2025-09-10 01:07:07.364958 | orchestrator | skipping: [testbed-node-5] 2025-09-10 01:07:07.364967 | orchestrator | 2025-09-10 01:07:07.364977 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-09-10 01:07:07.364986 | orchestrator | Wednesday 10 September 2025 01:04:24 +0000 (0:00:03.117) 0:01:30.504 *** 2025-09-10 01:07:07.364996 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:07:07.365005 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:07:07.365014 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:07:07.365024 | orchestrator | skipping: [testbed-node-5] 2025-09-10 01:07:07.365033 | orchestrator | skipping: [testbed-node-4] 2025-09-10 01:07:07.365043 | orchestrator | skipping: [testbed-node-3] 2025-09-10 01:07:07.365052 | orchestrator | 2025-09-10 01:07:07.365061 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-09-10 01:07:07.365071 | orchestrator | Wednesday 10 September 2025 01:04:28 +0000 (0:00:03.476) 0:01:33.980 *** 2025-09-10 01:07:07.365080 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:07:07.365090 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:07:07.365099 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:07:07.365108 | orchestrator | skipping: [testbed-node-4] 2025-09-10 01:07:07.365118 | orchestrator | skipping: [testbed-node-3] 2025-09-10 01:07:07.365127 | orchestrator | skipping: [testbed-node-5] 2025-09-10 01:07:07.365136 | orchestrator | 2025-09-10 01:07:07.365146 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-09-10 01:07:07.365155 | orchestrator | Wednesday 10 
September 2025 01:04:31 +0000 (0:00:03.114) 0:01:37.094 *** 2025-09-10 01:07:07.365165 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:07:07.365174 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:07:07.365184 | orchestrator | skipping: [testbed-node-3] 2025-09-10 01:07:07.365193 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:07:07.365203 | orchestrator | skipping: [testbed-node-5] 2025-09-10 01:07:07.365212 | orchestrator | skipping: [testbed-node-4] 2025-09-10 01:07:07.365221 | orchestrator | 2025-09-10 01:07:07.365230 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-09-10 01:07:07.365240 | orchestrator | Wednesday 10 September 2025 01:04:33 +0000 (0:00:02.510) 0:01:39.604 *** 2025-09-10 01:07:07.365249 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:07:07.365259 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:07:07.365268 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:07:07.365277 | orchestrator | skipping: [testbed-node-3] 2025-09-10 01:07:07.365287 | orchestrator | skipping: [testbed-node-4] 2025-09-10 01:07:07.365296 | orchestrator | skipping: [testbed-node-5] 2025-09-10 01:07:07.365306 | orchestrator | 2025-09-10 01:07:07.365315 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-09-10 01:07:07.365325 | orchestrator | Wednesday 10 September 2025 01:04:35 +0000 (0:00:01.898) 0:01:41.503 *** 2025-09-10 01:07:07.365334 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-10 01:07:07.365344 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:07:07.365353 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-10 01:07:07.365363 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-10 01:07:07.365373 | orchestrator | 
skipping: [testbed-node-3] 2025-09-10 01:07:07.365382 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:07:07.365392 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-10 01:07:07.365407 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:07:07.365417 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-10 01:07:07.365432 | orchestrator | skipping: [testbed-node-5] 2025-09-10 01:07:07.365442 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-10 01:07:07.365451 | orchestrator | skipping: [testbed-node-4] 2025-09-10 01:07:07.365461 | orchestrator | 2025-09-10 01:07:07.365470 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-09-10 01:07:07.365480 | orchestrator | Wednesday 10 September 2025 01:04:37 +0000 (0:00:02.067) 0:01:43.571 *** 2025-09-10 01:07:07.365540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-10 01:07:07.365551 | orchestrator | skipping: [testbed-node-1] 2025-09-10 
01:07:07.365566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-10 01:07:07.365576 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:07:07.365585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-10 01:07:07.365593 | orchestrator | skipping: [testbed-node-0] 2025-09-10 
01:07:07.365601 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-10 01:07:07.365614 | orchestrator | skipping: [testbed-node-4] 2025-09-10 01:07:07.365628 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-10 01:07:07.365637 | orchestrator | skipping: [testbed-node-3] 2025-09-10 01:07:07.365645 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-10 01:07:07.365657 | orchestrator | skipping: [testbed-node-5] 2025-09-10 01:07:07.365665 | orchestrator | 2025-09-10 01:07:07.365673 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-09-10 01:07:07.365681 | orchestrator | Wednesday 10 September 2025 01:04:40 +0000 (0:00:02.595) 0:01:46.167 *** 2025-09-10 01:07:07.365689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-10 01:07:07.365697 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:07:07.365705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-10 01:07:07.365713 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:07:07.365732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-10 01:07:07.365740 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:07:07.365748 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-10 01:07:07.365757 | orchestrator | skipping: [testbed-node-3] 2025-09-10 01:07:07.365768 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-10 01:07:07.365776 | orchestrator | skipping: [testbed-node-4] 2025-09-10 01:07:07.365784 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-10 01:07:07.365792 | orchestrator | skipping: [testbed-node-5] 2025-09-10 01:07:07.365800 | orchestrator | 2025-09-10 01:07:07.365808 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-09-10 01:07:07.365816 | orchestrator | Wednesday 10 September 2025 01:04:42 +0000 (0:00:02.105) 0:01:48.273 *** 2025-09-10 01:07:07.365824 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:07:07.365832 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:07:07.365839 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:07:07.365847 | orchestrator | skipping: [testbed-node-3] 2025-09-10 01:07:07.365855 | orchestrator | skipping: [testbed-node-4] 2025-09-10 01:07:07.365868 | orchestrator | skipping: [testbed-node-5] 2025-09-10 01:07:07.365876 | orchestrator | 2025-09-10 01:07:07.365883 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-09-10 01:07:07.365891 | orchestrator | Wednesday 10 September 2025 01:04:44 +0000 (0:00:02.069) 0:01:50.342 *** 2025-09-10 01:07:07.365899 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:07:07.365907 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:07:07.365915 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:07:07.365922 | orchestrator | changed: [testbed-node-3] 2025-09-10 01:07:07.365930 | orchestrator | changed: [testbed-node-4] 2025-09-10 01:07:07.365938 | orchestrator | changed: [testbed-node-5] 2025-09-10 01:07:07.365945 | orchestrator | 2025-09-10 01:07:07.365953 | orchestrator | TASK [neutron : Copying over metering_agent.ini] 
*******************************
2025-09-10 01:07:07.365961 | orchestrator | Wednesday 10 September 2025 01:04:48 +0000 (0:00:03.801) 0:01:54.143 ***
2025-09-10 01:07:07.365969 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:07:07.365977 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:07:07.365984 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:07:07.365992 | orchestrator | skipping: [testbed-node-4]
2025-09-10 01:07:07.366000 | orchestrator | skipping: [testbed-node-3]
2025-09-10 01:07:07.366008 | orchestrator | skipping: [testbed-node-5]
2025-09-10 01:07:07.366054 | orchestrator |
2025-09-10 01:07:07.366064 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2025-09-10 01:07:07.366072 | orchestrator | Wednesday 10 September 2025 01:04:51 +0000 (0:00:03.388) 0:01:57.532 ***
2025-09-10 01:07:07.366081 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:07:07.366094 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:07:07.366102 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:07:07.366109 | orchestrator | skipping: [testbed-node-3]
2025-09-10 01:07:07.366117 | orchestrator | skipping: [testbed-node-5]
2025-09-10 01:07:07.366125 | orchestrator | skipping: [testbed-node-4]
2025-09-10 01:07:07.366133 | orchestrator |
2025-09-10 01:07:07.366141 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2025-09-10 01:07:07.366149 | orchestrator | Wednesday 10 September 2025 01:04:55 +0000 (0:00:03.973) 0:02:01.506 ***
2025-09-10 01:07:07.366157 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:07:07.366164 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:07:07.366172 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:07:07.366180 | orchestrator | skipping: [testbed-node-3]
2025-09-10 01:07:07.366187 | orchestrator | skipping: [testbed-node-4]
2025-09-10 01:07:07.366195 | orchestrator | skipping: [testbed-node-5]
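The per-node `skipping` results above come from kolla-ansible looping over a services dict and discarding entries that do not apply to the current host. A minimal sketch of that filtering, with an illustrative dict shaped like the loop items in this log and a hypothetical helper name (`services_for_host` is not a real kolla-ansible function):

```python
# Sketch of per-host filtering of a kolla-style services dict.
# The dict mirrors the loop items printed in the log; the helper
# name `services_for_host` is hypothetical.
services = {
    "neutron-server": {
        "group": "neutron-server",
        "enabled": True,
        "host_in_groups": True,   # computed per host in the real role
    },
    "neutron-ovn-metadata-agent": {
        "enabled": True,
        "host_in_groups": False,  # e.g. a node that does not run the agent
    },
}

def services_for_host(services):
    """Keep only services that are enabled and mapped to this host."""
    return {
        name: svc
        for name, svc in services.items()
        if svc.get("enabled") and svc.get("host_in_groups")
    }

print(sorted(services_for_host(services)))  # ['neutron-server']
```

Entries failing either condition surface as `skipping: [host] => (item=...)` in the Ansible output.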
2025-09-10 01:07:07.366203 | orchestrator |
2025-09-10 01:07:07.366211 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2025-09-10 01:07:07.366231 | orchestrator | Wednesday 10 September 2025 01:04:58 +0000 (0:00:02.842) 0:02:04.349 ***
2025-09-10 01:07:07.366239 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:07:07.366247 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:07:07.366255 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:07:07.366262 | orchestrator | skipping: [testbed-node-3]
2025-09-10 01:07:07.366270 | orchestrator | skipping: [testbed-node-4]
2025-09-10 01:07:07.366278 | orchestrator | skipping: [testbed-node-5]
2025-09-10 01:07:07.366285 | orchestrator |
2025-09-10 01:07:07.366293 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2025-09-10 01:07:07.366301 | orchestrator | Wednesday 10 September 2025 01:05:00 +0000 (0:00:02.296) 0:02:06.645 ***
2025-09-10 01:07:07.366309 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:07:07.366316 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:07:07.366324 | orchestrator | skipping: [testbed-node-4]
2025-09-10 01:07:07.366332 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:07:07.366340 | orchestrator | skipping: [testbed-node-3]
2025-09-10 01:07:07.366347 | orchestrator | skipping: [testbed-node-5]
2025-09-10 01:07:07.366355 | orchestrator |
2025-09-10 01:07:07.366363 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2025-09-10 01:07:07.366382 | orchestrator | Wednesday 10 September 2025 01:05:03 +0000 (0:00:02.472) 0:02:09.118 ***
2025-09-10 01:07:07.366390 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:07:07.366398 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:07:07.366405 | orchestrator | skipping: [testbed-node-3]
2025-09-10 01:07:07.366413 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:07:07.366421 | orchestrator | skipping: [testbed-node-5]
2025-09-10 01:07:07.366428 | orchestrator | skipping: [testbed-node-4]
2025-09-10 01:07:07.366436 | orchestrator |
2025-09-10 01:07:07.366444 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2025-09-10 01:07:07.366452 | orchestrator | Wednesday 10 September 2025 01:05:05 +0000 (0:00:02.578) 0:02:11.697 ***
2025-09-10 01:07:07.366460 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:07:07.366468 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:07:07.366475 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:07:07.366495 | orchestrator | skipping: [testbed-node-3]
2025-09-10 01:07:07.366504 | orchestrator | skipping: [testbed-node-4]
2025-09-10 01:07:07.366511 | orchestrator | skipping: [testbed-node-5]
2025-09-10 01:07:07.366519 | orchestrator |
2025-09-10 01:07:07.366527 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2025-09-10 01:07:07.366535 | orchestrator | Wednesday 10 September 2025 01:05:07 +0000 (0:00:02.069) 0:02:13.767 ***
2025-09-10 01:07:07.366543 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-09-10 01:07:07.366551 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:07:07.366559 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-09-10 01:07:07.366567 | orchestrator | skipping: [testbed-node-3]
2025-09-10 01:07:07.366575 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-09-10 01:07:07.366583 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:07:07.366591 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-09-10 01:07:07.366598 | orchestrator | skipping: [testbed-node-5]
2025-09-10
01:07:07.366606 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-10 01:07:07.366614 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:07:07.366622 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-10 01:07:07.366630 | orchestrator | skipping: [testbed-node-4] 2025-09-10 01:07:07.366638 | orchestrator | 2025-09-10 01:07:07.366646 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-09-10 01:07:07.366654 | orchestrator | Wednesday 10 September 2025 01:05:11 +0000 (0:00:03.501) 0:02:17.269 *** 2025-09-10 01:07:07.366666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-10 01:07:07.366675 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:07:07.366683 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-10 01:07:07.366696 | orchestrator | skipping: [testbed-node-4] 2025-09-10 01:07:07.366708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-10 01:07:07.366717 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:07:07.366725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-10 01:07:07.366733 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:07:07.366741 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-10 01:07:07.366749 | orchestrator | skipping: [testbed-node-5] 2025-09-10 01:07:07.366763 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-10 01:07:07.366776 | orchestrator | skipping: [testbed-node-3] 2025-09-10 01:07:07.366784 | orchestrator | 2025-09-10 01:07:07.366792 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-09-10 01:07:07.366799 | orchestrator | Wednesday 10 September 2025 01:05:13 +0000 (0:00:02.215) 0:02:19.484 *** 2025-09-10 01:07:07.366808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-10 01:07:07.366820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-10 01:07:07.366828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-10 01:07:07.366836 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-10 01:07:07.366852 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-10 01:07:07.366868 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-10 01:07:07.366877 | orchestrator | 2025-09-10 01:07:07.366885 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-10 01:07:07.366893 | orchestrator | Wednesday 10 September 2025 01:05:17 +0000 (0:00:04.261) 0:02:23.745 *** 2025-09-10 01:07:07.366900 | orchestrator | skipping: [testbed-node-0] 
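The `healthcheck_port neutron-ovn-metadata-agent 6640` test carried in the loop items above is, at its core, a TCP reachability probe. A minimal sketch of such a probe follows; this is not kolla's actual `healthcheck_port` script (which also takes the process name into account), just the idea it boils down to:

```python
import socket

def port_is_open(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds within timeout.
    Minimal stand-in for a 'healthcheck_port'-style container probe; the
    True/False result maps to the exit code a HEALTHCHECK consumes."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a throwaway listener on an ephemeral port.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
host, port = srv.getsockname()
print(port_is_open(host, port))  # → True
srv.close()
```

The `healthcheck_curl http://…:9696` variant seen for `neutron-server` is the HTTP analogue: it succeeds only if the endpoint answers.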
2025-09-10 01:07:07.366912 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:07:07.366920 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:07:07.366928 | orchestrator | skipping: [testbed-node-3]
2025-09-10 01:07:07.366935 | orchestrator | skipping: [testbed-node-4]
2025-09-10 01:07:07.366943 | orchestrator | skipping: [testbed-node-5]
2025-09-10 01:07:07.366951 | orchestrator |
2025-09-10 01:07:07.366959 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2025-09-10 01:07:07.366966 | orchestrator | Wednesday 10 September 2025 01:05:18 +0000 (0:00:00.741) 0:02:24.487 ***
2025-09-10 01:07:07.366974 | orchestrator | changed: [testbed-node-0]
2025-09-10 01:07:07.366982 | orchestrator |
2025-09-10 01:07:07.366990 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2025-09-10 01:07:07.366997 | orchestrator | Wednesday 10 September 2025 01:05:20 +0000 (0:00:02.295) 0:02:26.783 ***
2025-09-10 01:07:07.367005 | orchestrator | changed: [testbed-node-0]
2025-09-10 01:07:07.367013 | orchestrator |
2025-09-10 01:07:07.367020 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2025-09-10 01:07:07.367028 | orchestrator | Wednesday 10 September 2025 01:05:23 +0000 (0:00:02.370) 0:02:29.153 ***
2025-09-10 01:07:07.367036 | orchestrator | changed: [testbed-node-0]
2025-09-10 01:07:07.367044 | orchestrator |
2025-09-10 01:07:07.367051 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-09-10 01:07:07.367059 | orchestrator | Wednesday 10 September 2025 01:06:08 +0000 (0:00:45.049) 0:03:14.203 ***
2025-09-10 01:07:07.367067 | orchestrator |
2025-09-10 01:07:07.367075 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-09-10 01:07:07.367082 | orchestrator | Wednesday 10 September 2025 01:06:08 +0000 (0:00:00.147) 0:03:14.350 ***
2025-09-10 01:07:07.367090 | orchestrator |
2025-09-10 01:07:07.367098 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-09-10 01:07:07.367106 | orchestrator | Wednesday 10 September 2025 01:06:08 +0000 (0:00:00.199) 0:03:14.550 ***
2025-09-10 01:07:07.367114 | orchestrator |
2025-09-10 01:07:07.367122 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-09-10 01:07:07.367129 | orchestrator | Wednesday 10 September 2025 01:06:08 +0000 (0:00:00.127) 0:03:14.678 ***
2025-09-10 01:07:07.367137 | orchestrator |
2025-09-10 01:07:07.367150 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-09-10 01:07:07.367157 | orchestrator | Wednesday 10 September 2025 01:06:08 +0000 (0:00:00.140) 0:03:14.819 ***
2025-09-10 01:07:07.367165 | orchestrator |
2025-09-10 01:07:07.367173 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-09-10 01:07:07.367181 | orchestrator | Wednesday 10 September 2025 01:06:09 +0000 (0:00:00.149) 0:03:14.968 ***
2025-09-10 01:07:07.367188 | orchestrator |
2025-09-10 01:07:07.367196 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2025-09-10 01:07:07.367204 | orchestrator | Wednesday 10 September 2025 01:06:09 +0000 (0:00:00.131) 0:03:15.099 ***
2025-09-10 01:07:07.367212 | orchestrator | changed: [testbed-node-0]
2025-09-10 01:07:07.367219 | orchestrator | changed: [testbed-node-1]
2025-09-10 01:07:07.367227 | orchestrator | changed: [testbed-node-2]
2025-09-10 01:07:07.367235 | orchestrator |
2025-09-10 01:07:07.367243 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2025-09-10 01:07:07.367250 | orchestrator | Wednesday 10 September 2025 01:06:36 +0000 (0:00:27.096) 0:03:42.196 ***
2025-09-10 01:07:07.367258 | orchestrator | changed: [testbed-node-5]
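The `healthcheck` dicts in the container-check loop items above carry plain string seconds (`'interval': '30'`, …), while the Docker Engine API expects nanosecond integers in its `Healthcheck` object. A sketch of that translation; how kolla-ansible itself performs it may differ, and the field selection here is an assumption based on the Engine API:

```python
def to_docker_healthcheck(hc):
    """Translate a kolla-style healthcheck dict (string seconds, as seen
    in the log's loop items) into the nanosecond fields of the Docker
    Engine API Healthcheck object. Illustrative sketch only."""
    def ns(seconds):
        # Docker expects durations in nanoseconds.
        return int(seconds) * 1_000_000_000

    return {
        "Test": hc["test"],
        "Interval": ns(hc["interval"]),
        "Timeout": ns(hc["timeout"]),
        "StartPeriod": ns(hc["start_period"]),
        "Retries": int(hc["retries"]),
    }

hc = {"interval": "30", "retries": "3", "start_period": "5",
      "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9696"],
      "timeout": "30"}
print(to_docker_healthcheck(hc)["Interval"])  # → 30000000000
```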
2025-09-10 01:07:07.367266 | orchestrator | changed: [testbed-node-3]
2025-09-10 01:07:07.367274 | orchestrator | changed: [testbed-node-4]
2025-09-10 01:07:07.367281 | orchestrator |
2025-09-10 01:07:07.367289 | orchestrator | PLAY RECAP *********************************************************************
2025-09-10 01:07:07.367297 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-09-10 01:07:07.367310 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-09-10 01:07:07.367319 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-09-10 01:07:07.367327 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-09-10 01:07:07.367334 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-09-10 01:07:07.367342 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-09-10 01:07:07.367350 | orchestrator |
2025-09-10 01:07:07.367357 | orchestrator |
2025-09-10 01:07:07.367365 | orchestrator | TASKS RECAP ********************************************************************
2025-09-10 01:07:07.367373 | orchestrator | Wednesday 10 September 2025 01:07:05 +0000 (0:00:28.944) 0:04:11.141 ***
2025-09-10 01:07:07.367381 | orchestrator | ===============================================================================
2025-09-10 01:07:07.367388 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 45.05s
2025-09-10 01:07:07.367396 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 28.94s
2025-09-10 01:07:07.367404 | orchestrator | neutron : Restart neutron-server container ----------------------------- 27.10s
2025-09-10 01:07:07.367411 | orchestrator |
service-ks-register : neutron | Granting user roles --------------------- 7.91s
2025-09-10 01:07:07.367419 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.41s
2025-09-10 01:07:07.367427 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.29s
2025-09-10 01:07:07.367438 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.31s
2025-09-10 01:07:07.367446 | orchestrator | neutron : Check neutron containers -------------------------------------- 4.26s
2025-09-10 01:07:07.367454 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 4.23s
2025-09-10 01:07:07.367461 | orchestrator | neutron : Copying over ironic_neutron_agent.ini ------------------------- 3.97s
2025-09-10 01:07:07.367474 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.96s
2025-09-10 01:07:07.367482 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.88s
2025-09-10 01:07:07.367504 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.84s
2025-09-10 01:07:07.367512 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.80s
2025-09-10 01:07:07.367519 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.78s
2025-09-10 01:07:07.367527 | orchestrator | neutron : Copying over neutron-tls-proxy.cfg ---------------------------- 3.50s
2025-09-10 01:07:07.367535 | orchestrator | neutron : Copying over sriov_agent.ini ---------------------------------- 3.48s
2025-09-10 01:07:07.367542 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.43s
2025-09-10 01:07:07.367550 | orchestrator | neutron : Copying over metering_agent.ini ------------------------------- 3.39s
2025-09-10 01:07:07.367558 | orchestrator |
service-ks-register : neutron | Creating projects ----------------------- 3.34s 2025-09-10 01:07:07.367566 | orchestrator | 2025-09-10 01:07:07 | INFO  | Task 0964edfb-d947-491e-8013-959a26b30137 is in state STARTED 2025-09-10 01:07:07.367573 | orchestrator | 2025-09-10 01:07:07 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:07:10.407261 | orchestrator | 2025-09-10 01:07:10 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:07:10.407632 | orchestrator | 2025-09-10 01:07:10 | INFO  | Task aa7ce733-06c2-47fe-9e95-c4eada13f55c is in state STARTED 2025-09-10 01:07:10.408258 | orchestrator | 2025-09-10 01:07:10 | INFO  | Task 47736d54-0501-4764-bdb4-8f9929837f38 is in state STARTED 2025-09-10 01:07:10.408996 | orchestrator | 2025-09-10 01:07:10 | INFO  | Task 0964edfb-d947-491e-8013-959a26b30137 is in state STARTED 2025-09-10 01:07:10.409340 | orchestrator | 2025-09-10 01:07:10 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:07:13.450392 | orchestrator | 2025-09-10 01:07:13 | INFO  | Task ddbcd126-35f5-410a-b8f8-7c776b3d7d41 is in state STARTED 2025-09-10 01:07:13.452650 | orchestrator | 2025-09-10 01:07:13 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:07:13.455822 | orchestrator | 2025-09-10 01:07:13 | INFO  | Task aa7ce733-06c2-47fe-9e95-c4eada13f55c is in state STARTED 2025-09-10 01:07:13.457758 | orchestrator | 2025-09-10 01:07:13 | INFO  | Task 47736d54-0501-4764-bdb4-8f9929837f38 is in state STARTED 2025-09-10 01:07:13.461017 | orchestrator | 2025-09-10 01:07:13 | INFO  | Task 0964edfb-d947-491e-8013-959a26b30137 is in state SUCCESS 2025-09-10 01:07:13.463887 | orchestrator | 2025-09-10 01:07:13.463940 | orchestrator | 2025-09-10 01:07:13.463954 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-10 01:07:13.463966 | orchestrator | 2025-09-10 01:07:13.463981 | orchestrator | TASK [Group hosts based on Kolla 
action] *************************************** 2025-09-10 01:07:13.464000 | orchestrator | Wednesday 10 September 2025 01:03:57 +0000 (0:00:00.915) 0:00:00.915 *** 2025-09-10 01:07:13.464019 | orchestrator | ok: [testbed-node-0] 2025-09-10 01:07:13.464040 | orchestrator | ok: [testbed-node-1] 2025-09-10 01:07:13.464111 | orchestrator | ok: [testbed-node-2] 2025-09-10 01:07:13.464142 | orchestrator | 2025-09-10 01:07:13.464153 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-10 01:07:13.464168 | orchestrator | Wednesday 10 September 2025 01:03:58 +0000 (0:00:00.624) 0:00:01.540 *** 2025-09-10 01:07:13.464189 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-09-10 01:07:13.464209 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-09-10 01:07:13.464229 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-09-10 01:07:13.464248 | orchestrator | 2025-09-10 01:07:13.464268 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-09-10 01:07:13.464340 | orchestrator | 2025-09-10 01:07:13.464364 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-10 01:07:13.464421 | orchestrator | Wednesday 10 September 2025 01:03:58 +0000 (0:00:00.565) 0:00:02.105 *** 2025-09-10 01:07:13.464534 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 01:07:13.464558 | orchestrator | 2025-09-10 01:07:13.464579 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-09-10 01:07:13.464618 | orchestrator | Wednesday 10 September 2025 01:03:59 +0000 (0:00:01.250) 0:00:03.356 *** 2025-09-10 01:07:13.464782 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-09-10 01:07:13.464800 | orchestrator | 2025-09-10 01:07:13.464811 | 
orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-09-10 01:07:13.464822 | orchestrator | Wednesday 10 September 2025 01:04:03 +0000 (0:00:03.680) 0:00:07.036 *** 2025-09-10 01:07:13.464833 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-09-10 01:07:13.464859 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-09-10 01:07:13.464871 | orchestrator | 2025-09-10 01:07:13.464882 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-09-10 01:07:13.464893 | orchestrator | Wednesday 10 September 2025 01:04:10 +0000 (0:00:06.839) 0:00:13.876 *** 2025-09-10 01:07:13.464904 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-10 01:07:13.464915 | orchestrator | 2025-09-10 01:07:13.464925 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-09-10 01:07:13.464936 | orchestrator | Wednesday 10 September 2025 01:04:13 +0000 (0:00:03.308) 0:00:17.184 *** 2025-09-10 01:07:13.464947 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-10 01:07:13.464957 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-09-10 01:07:13.464968 | orchestrator | 2025-09-10 01:07:13.464979 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-09-10 01:07:13.464989 | orchestrator | Wednesday 10 September 2025 01:04:17 +0000 (0:00:04.026) 0:00:21.211 *** 2025-09-10 01:07:13.465000 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-10 01:07:13.465011 | orchestrator | 2025-09-10 01:07:13.465021 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-09-10 01:07:13.465032 | orchestrator | Wednesday 10 September 2025 01:04:20 +0000 (0:00:03.181) 
0:00:24.393 *** 2025-09-10 01:07:13.465042 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-09-10 01:07:13.465053 | orchestrator | 2025-09-10 01:07:13.465063 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-09-10 01:07:13.465074 | orchestrator | Wednesday 10 September 2025 01:04:25 +0000 (0:00:04.165) 0:00:28.559 *** 2025-09-10 01:07:13.465089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-10 01:07:13.465124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-10 01:07:13.465151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-10 01:07:13.465169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-10 01:07:13.465183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-10 01:07:13.465194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-10 01:07:13.465206 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.465245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.465259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.465271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.465288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.465302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.465323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.465343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.465385 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.465407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.465472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.465530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.465553 | orchestrator | 2025-09-10 01:07:13.465595 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-09-10 01:07:13.465637 | orchestrator | Wednesday 10 September 2025 01:04:29 +0000 (0:00:04.014) 0:00:32.574 *** 2025-09-10 01:07:13.465686 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:07:13.465697 | orchestrator | 2025-09-10 01:07:13.465708 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-09-10 01:07:13.465719 | orchestrator | Wednesday 10 September 2025 01:04:29 +0000 (0:00:00.314) 0:00:32.888 *** 2025-09-10 01:07:13.465729 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:07:13.465740 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:07:13.465751 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:07:13.465761 | orchestrator | 2025-09-10 01:07:13.465772 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-10 01:07:13.465782 | orchestrator | Wednesday 10 September 2025 01:04:30 +0000 (0:00:00.777) 0:00:33.666 *** 2025-09-10 01:07:13.465793 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 01:07:13.465804 | 
orchestrator | 2025-09-10 01:07:13.465815 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-09-10 01:07:13.465835 | orchestrator | Wednesday 10 September 2025 01:04:31 +0000 (0:00:01.103) 0:00:34.770 *** 2025-09-10 01:07:13.465846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-10 01:07:13.465866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-10 01:07:13.465878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-10 01:07:13.465896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-10 01:07:13.465908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-10 01:07:13.465926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-10 01:07:13.465937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.465955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.465966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.465983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.465995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.466006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.466078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.466100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.466112 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.466123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.466146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.466158 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.466176 | orchestrator | 2025-09-10 01:07:13.466188 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-09-10 01:07:13.466199 | orchestrator | Wednesday 10 September 2025 01:04:37 +0000 (0:00:06.351) 0:00:41.122 *** 2025-09-10 01:07:13.466210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-10 01:07:13.466221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-10 01:07:13.466239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.466251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.466267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.466278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.466296 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:07:13.466307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-10 01:07:13.466319 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-10 01:07:13.467029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.467111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.467127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 
'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.467158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.467196 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:07:13.467218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-10 01:07:13.467238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-10 01:07:13.467281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.467302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.467316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.467334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.467355 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:07:13.467367 | orchestrator | 2025-09-10 01:07:13.467379 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-09-10 01:07:13.467391 | orchestrator | Wednesday 10 September 2025 01:04:38 +0000 (0:00:01.129) 0:00:42.251 *** 2025-09-10 01:07:13.467402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-10 01:07:13.467413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-10 01:07:13.467433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.467444 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.467456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.467480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.467522 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:07:13.467534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-10 01:07:13.467546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-10 01:07:13.467565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.467579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.467592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.467618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': 
'30'}}})  2025-09-10 01:07:13.467631 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:07:13.467645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-10 01:07:13.467658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-10 01:07:13.467672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.467692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.467704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.467729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.467743 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:07:13.467756 | orchestrator | 2025-09-10 01:07:13.467768 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-09-10 01:07:13.467782 | orchestrator | Wednesday 10 September 2025 01:04:40 +0000 (0:00:01.724) 0:00:43.976 *** 2025-09-10 01:07:13.467795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-10 01:07:13.467809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-10 01:07:13.467831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-10 01:07:13.467844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-10 
01:07:13.467870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-10 01:07:13.467883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-10 01:07:13.467896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-10 
01:07:13.467910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.467928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.467939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.467957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.467973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.467985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.467996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.468007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.468023 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.468035 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-10 01:07:13.468058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-10 01:07:13.468069 | orchestrator |
2025-09-10 01:07:13.468080 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2025-09-10 01:07:13.468091 | orchestrator | Wednesday 10 September 2025 01:04:47 +0000 (0:00:07.134) 0:00:51.110 ***
2025-09-10 01:07:13.468107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn':
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-10 01:07:13.468119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-10 01:07:13.468131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-10 01:07:13.468149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': 
{'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-10 01:07:13.468166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-10 01:07:13.468182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-10 01:07:13.468194 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.468205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.468216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.468233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 
'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.468251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.468262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.468278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.468290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.468301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.468312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-10 01:07:13.468330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-10 01:07:13.468348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-10 01:07:13.468359 | orchestrator |
2025-09-10 01:07:13.468370 | orchestrator | TASK [designate : Copying over pools.yaml] *************************************
2025-09-10 01:07:13.468381 | orchestrator | Wednesday 10 September 2025 01:05:08 +0000 (0:00:20.401) 0:01:11.512 ***
2025-09-10 01:07:13.468392 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-09-10 01:07:13.468403 | orchestrator | changed: [testbed-node-0] =>
(item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-09-10 01:07:13.468414 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-09-10 01:07:13.468425 | orchestrator |
2025-09-10 01:07:13.468436 | orchestrator | TASK [designate : Copying over named.conf] *************************************
2025-09-10 01:07:13.468446 | orchestrator | Wednesday 10 September 2025 01:05:14 +0000 (0:00:06.631) 0:01:18.144 ***
2025-09-10 01:07:13.468457 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-09-10 01:07:13.468468 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-09-10 01:07:13.468499 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-09-10 01:07:13.468511 | orchestrator |
2025-09-10 01:07:13.468522 | orchestrator | TASK [designate : Copying over rndc.conf] **************************************
2025-09-10 01:07:13.468533 | orchestrator | Wednesday 10 September 2025 01:05:19 +0000 (0:00:04.545) 0:01:22.690 ***
2025-09-10 01:07:13.468544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz',
'port': '9001', 'listen_port': '9001'}}}})  2025-09-10 01:07:13.468556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-10 01:07:13.468581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-10 01:07:13.468593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 
'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-10 01:07:13.468604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.468620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.468631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.468643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-10 01:07:13.468660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.468676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.468688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.468704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-10 01:07:13.468715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.468727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.468738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.468756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-10 01:07:13.468773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-10 01:07:13.468785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-10 01:07:13.468796 | orchestrator |
2025-09-10 01:07:13.468807 | orchestrator | TASK [designate : Copying over rndc.key] ***************************************
2025-09-10 01:07:13.468818 | orchestrator | Wednesday 10 September 2025 01:05:21 +0000 (0:00:02.701) 0:01:25.392 ***
2025-09-10 01:07:13.468834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image':
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-10 01:07:13.468846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-10 01:07:13.468864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-10 01:07:13.468881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-10 01:07:13.468893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.468912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.468923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.468935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-10 01:07:13.468952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.468964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.468981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.468992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-10 01:07:13.469008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.469019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.469037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.469048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.469128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.469142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.469154 | orchestrator | 2025-09-10 01:07:13.469165 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-10 01:07:13.469176 | orchestrator | Wednesday 10 September 2025 01:05:24 +0000 (0:00:02.647) 0:01:28.039 *** 2025-09-10 01:07:13.469187 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:07:13.469198 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:07:13.469209 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:07:13.469220 | orchestrator | 2025-09-10 01:07:13.469231 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-09-10 01:07:13.469242 | orchestrator | Wednesday 10 September 2025 01:05:24 +0000 (0:00:00.268) 0:01:28.308 *** 2025-09-10 01:07:13.469258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-10 01:07:13.469279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 
'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-10 01:07:13.469291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.469302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.469321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.469333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.469344 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:07:13.469360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}})  2025-09-10 01:07:13.469379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-10 01:07:13.469390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.469401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.469419 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.469430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.469441 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:07:13.469457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-10 01:07:13.469475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-10 01:07:13.469518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.469531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.469542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.469560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-10 01:07:13.469571 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:07:13.469582 | orchestrator | 2025-09-10 01:07:13.469593 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-09-10 01:07:13.469604 | orchestrator | Wednesday 10 September 2025 01:05:27 +0000 (0:00:02.224) 0:01:30.533 *** 2025-09-10 01:07:13.469620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-10 01:07:13.469639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-10 01:07:13.469650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-10 01:07:13.469661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-10 01:07:13.469678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-10 01:07:13.469690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-10 01:07:13.469712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.469724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.469735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.469746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.469762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.469773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.469784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.469811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.469822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.469833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.469844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-10 01:07:13.469861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 
5672'], 'timeout': '30'}}})
2025-09-10 01:07:13.469872 | orchestrator |
2025-09-10 01:07:13.469883 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-09-10 01:07:13.469894 | orchestrator | Wednesday 10 September 2025 01:05:32 +0000 (0:00:05.099) 0:01:35.632 ***
2025-09-10 01:07:13.469905 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:07:13.469916 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:07:13.469927 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:07:13.469938 | orchestrator |
2025-09-10 01:07:13.469948 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2025-09-10 01:07:13.469959 | orchestrator | Wednesday 10 September 2025 01:05:32 +0000 (0:00:00.251) 0:01:35.884 ***
2025-09-10 01:07:13.469978 | orchestrator | changed: [testbed-node-0] => (item=designate)
2025-09-10 01:07:13.469989 | orchestrator |
2025-09-10 01:07:13.469999 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2025-09-10 01:07:13.470010 | orchestrator | Wednesday 10 September 2025 01:05:34 +0000 (0:00:02.391) 0:01:38.275 ***
2025-09-10 01:07:13.470101 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-10 01:07:13.470113 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2025-09-10 01:07:13.470124 | orchestrator |
2025-09-10 01:07:13.470135 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2025-09-10 01:07:13.470146 | orchestrator | Wednesday 10 September 2025 01:05:37 +0000 (0:00:02.347) 0:01:40.622 ***
2025-09-10 01:07:13.470156 | orchestrator | changed: [testbed-node-0]
2025-09-10 01:07:13.470167 | orchestrator |
2025-09-10 01:07:13.470178 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-09-10 01:07:13.470189 | orchestrator | Wednesday 10 September 2025 01:05:54 +0000 (0:00:17.761) 0:01:58.384 ***
2025-09-10 01:07:13.470200 | orchestrator |
2025-09-10 01:07:13.470211 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-09-10 01:07:13.470221 | orchestrator | Wednesday 10 September 2025 01:05:55 +0000 (0:00:00.630) 0:01:59.014 ***
2025-09-10 01:07:13.470232 | orchestrator |
2025-09-10 01:07:13.470243 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-09-10 01:07:13.470253 | orchestrator | Wednesday 10 September 2025 01:05:55 +0000 (0:00:00.070) 0:01:59.085 ***
2025-09-10 01:07:13.470264 | orchestrator |
2025-09-10 01:07:13.470280 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2025-09-10 01:07:13.470291 | orchestrator | Wednesday 10 September 2025 01:05:55 +0000 (0:00:00.099) 0:01:59.185 ***
2025-09-10 01:07:13.470302 | orchestrator | changed: [testbed-node-0]
2025-09-10 01:07:13.470313 | orchestrator | changed: [testbed-node-1]
2025-09-10 01:07:13.470323 | orchestrator | changed: [testbed-node-2]
2025-09-10 01:07:13.470334 | orchestrator |
2025-09-10 01:07:13.470345 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2025-09-10 01:07:13.470356 | orchestrator | Wednesday 10 September 2025 01:06:06 +0000 (0:00:10.589) 0:02:09.774 ***
2025-09-10 01:07:13.470366 | orchestrator | changed: [testbed-node-0]
2025-09-10 01:07:13.470377 | orchestrator | changed: [testbed-node-1]
2025-09-10 01:07:13.470388 | orchestrator | changed: [testbed-node-2]
2025-09-10 01:07:13.470398 | orchestrator |
2025-09-10 01:07:13.470409 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2025-09-10 01:07:13.470420 | orchestrator | Wednesday 10 September 2025 01:06:18 +0000 (0:00:12.130) 0:02:21.905 ***
2025-09-10 01:07:13.470431 | orchestrator | changed: [testbed-node-0]
2025-09-10 01:07:13.470441 | orchestrator | changed: [testbed-node-1]
2025-09-10 01:07:13.470452 | orchestrator | changed: [testbed-node-2]
2025-09-10 01:07:13.470463 | orchestrator |
2025-09-10 01:07:13.470473 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2025-09-10 01:07:13.470504 | orchestrator | Wednesday 10 September 2025 01:06:26 +0000 (0:00:08.152) 0:02:30.057 ***
2025-09-10 01:07:13.470516 | orchestrator | changed: [testbed-node-0]
2025-09-10 01:07:13.470526 | orchestrator | changed: [testbed-node-1]
2025-09-10 01:07:13.470537 | orchestrator | changed: [testbed-node-2]
2025-09-10 01:07:13.470547 | orchestrator |
2025-09-10 01:07:13.470558 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2025-09-10 01:07:13.470569 | orchestrator | Wednesday 10 September 2025 01:06:40 +0000 (0:00:13.466) 0:02:43.524 ***
2025-09-10 01:07:13.470579 | orchestrator | changed: [testbed-node-0]
2025-09-10 01:07:13.470590 | orchestrator | changed: [testbed-node-1]
2025-09-10 01:07:13.470600 | orchestrator | changed: [testbed-node-2]
2025-09-10 01:07:13.470611 | orchestrator |
2025-09-10 01:07:13.470622 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2025-09-10 01:07:13.470632 | orchestrator | Wednesday 10 September 2025 01:06:52 +0000 (0:00:12.494) 0:02:56.019 ***
2025-09-10 01:07:13.470651 | orchestrator | changed: [testbed-node-0]
2025-09-10 01:07:13.470662 | orchestrator | changed: [testbed-node-1]
2025-09-10 01:07:13.470673 | orchestrator | changed: [testbed-node-2]
2025-09-10 01:07:13.470683 | orchestrator |
2025-09-10 01:07:13.470694 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2025-09-10 01:07:13.470704 | orchestrator | Wednesday 10 September 2025 01:07:03 +0000 (0:00:11.135) 0:03:07.155 ***
2025-09-10 01:07:13.470715 | orchestrator | changed: [testbed-node-0]
2025-09-10 01:07:13.470726 | orchestrator | 2025-09-10
01:07:13.470737 | orchestrator | PLAY RECAP *********************************************************************
2025-09-10 01:07:13.470748 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-10 01:07:13.470760 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-10 01:07:13.470770 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-10 01:07:13.470781 | orchestrator |
2025-09-10 01:07:13.470792 | orchestrator |
2025-09-10 01:07:13.470810 | orchestrator | TASKS RECAP ********************************************************************
2025-09-10 01:07:13.470821 | orchestrator | Wednesday 10 September 2025 01:07:11 +0000 (0:00:07.727) 0:03:14.882 ***
2025-09-10 01:07:13.470832 | orchestrator | ===============================================================================
2025-09-10 01:07:13.470843 | orchestrator | designate : Copying over designate.conf -------------------------------- 20.40s
2025-09-10 01:07:13.470853 | orchestrator | designate : Running Designate bootstrap container ---------------------- 17.76s
2025-09-10 01:07:13.470864 | orchestrator | designate : Restart designate-producer container ----------------------- 13.47s
2025-09-10 01:07:13.470875 | orchestrator | designate : Restart designate-mdns container --------------------------- 12.49s
2025-09-10 01:07:13.470886 | orchestrator | designate : Restart designate-api container ---------------------------- 12.13s
2025-09-10 01:07:13.470896 | orchestrator | designate : Restart designate-worker container ------------------------- 11.14s
2025-09-10 01:07:13.470907 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 10.59s
2025-09-10 01:07:13.470918 | orchestrator | designate : Restart designate-central container ------------------------- 8.15s
2025-09-10 01:07:13.470928 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.73s
2025-09-10 01:07:13.470939 | orchestrator | designate : Copying over config.json files for services ----------------- 7.13s
2025-09-10 01:07:13.470950 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.84s
2025-09-10 01:07:13.470960 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 6.63s
2025-09-10 01:07:13.470971 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.35s
2025-09-10 01:07:13.470982 | orchestrator | designate : Check designate containers ---------------------------------- 5.10s
2025-09-10 01:07:13.470993 | orchestrator | designate : Copying over named.conf ------------------------------------- 4.55s
2025-09-10 01:07:13.471003 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.17s
2025-09-10 01:07:13.471014 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.03s
2025-09-10 01:07:13.471025 | orchestrator | designate : Ensuring config directories exist --------------------------- 4.01s
2025-09-10 01:07:13.471040 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.68s
2025-09-10 01:07:13.471052 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.31s
2025-09-10 01:07:13.471063 | orchestrator | 2025-09-10 01:07:13 | INFO  | Wait 1 second(s) until the next check
2025-09-10 01:07:16.515680 | orchestrator | 2025-09-10 01:07:16 | INFO  | Task ddbcd126-35f5-410a-b8f8-7c776b3d7d41 is in state STARTED
2025-09-10 01:07:16.517447 | orchestrator | 2025-09-10 01:07:16 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED
2025-09-10 01:07:16.519304 | orchestrator | 2025-09-10 01:07:16 | INFO  | Task aa7ce733-06c2-47fe-9e95-c4eada13f55c is in state STARTED
2025-09-10
01:07:16.521262 | orchestrator | 2025-09-10 01:07:16 | INFO  | Task 47736d54-0501-4764-bdb4-8f9929837f38 is in state STARTED
2025-09-10 01:07:16.521602 | orchestrator | 2025-09-10 01:07:16 | INFO  | Wait 1 second(s) until the next check
2025-09-10 01:07:43.962790 | orchestrator | 2025-09-10 01:07:43 | INFO  | Task ddbcd126-35f5-410a-b8f8-7c776b3d7d41 is in state STARTED
2025-09-10 01:07:43.964471 | orchestrator | 2025-09-10 01:07:43 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED
2025-09-10 01:07:43.966195 | orchestrator | 2025-09-10 01:07:43 | INFO  | Task aa7ce733-06c2-47fe-9e95-c4eada13f55c is in state STARTED
2025-09-10 01:07:43.967109 | orchestrator | 2025-09-10 01:07:43 | INFO  | Task
47736d54-0501-4764-bdb4-8f9929837f38 is in state STARTED 2025-09-10 01:07:43.967164 | orchestrator | 2025-09-10 01:07:43 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:07:47.012897 | orchestrator | 2025-09-10 01:07:47 | INFO  | Task ddbcd126-35f5-410a-b8f8-7c776b3d7d41 is in state STARTED 2025-09-10 01:07:47.014975 | orchestrator | 2025-09-10 01:07:47 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:07:47.018590 | orchestrator | 2025-09-10 01:07:47 | INFO  | Task aa7ce733-06c2-47fe-9e95-c4eada13f55c is in state STARTED 2025-09-10 01:07:47.022132 | orchestrator | 2025-09-10 01:07:47 | INFO  | Task 47736d54-0501-4764-bdb4-8f9929837f38 is in state STARTED 2025-09-10 01:07:47.022522 | orchestrator | 2025-09-10 01:07:47 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:07:50.077670 | orchestrator | 2025-09-10 01:07:50 | INFO  | Task f589eda0-df10-4fa3-a519-e78f567baafc is in state STARTED 2025-09-10 01:07:50.077768 | orchestrator | 2025-09-10 01:07:50 | INFO  | Task ddbcd126-35f5-410a-b8f8-7c776b3d7d41 is in state STARTED 2025-09-10 01:07:50.078318 | orchestrator | 2025-09-10 01:07:50 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:07:50.079243 | orchestrator | 2025-09-10 01:07:50 | INFO  | Task aa7ce733-06c2-47fe-9e95-c4eada13f55c is in state STARTED 2025-09-10 01:07:50.080561 | orchestrator | 2025-09-10 01:07:50 | INFO  | Task 47736d54-0501-4764-bdb4-8f9929837f38 is in state SUCCESS 2025-09-10 01:07:50.082708 | orchestrator | 2025-09-10 01:07:50.082744 | orchestrator | 2025-09-10 01:07:50.082764 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-10 01:07:50.082785 | orchestrator | 2025-09-10 01:07:50.082804 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-10 01:07:50.082822 | orchestrator | Wednesday 10 September 2025 01:06:34 +0000 (0:00:00.380) 0:00:00.380 
*** 2025-09-10 01:07:50.082841 | orchestrator | ok: [testbed-node-0] 2025-09-10 01:07:50.082861 | orchestrator | ok: [testbed-node-1] 2025-09-10 01:07:50.082880 | orchestrator | ok: [testbed-node-2] 2025-09-10 01:07:50.082898 | orchestrator | 2025-09-10 01:07:50.082916 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-10 01:07:50.082934 | orchestrator | Wednesday 10 September 2025 01:06:34 +0000 (0:00:00.280) 0:00:00.660 *** 2025-09-10 01:07:50.082946 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-09-10 01:07:50.082958 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-09-10 01:07:50.082968 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-09-10 01:07:50.082979 | orchestrator | 2025-09-10 01:07:50.082990 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-09-10 01:07:50.083001 | orchestrator | 2025-09-10 01:07:50.083012 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-09-10 01:07:50.083023 | orchestrator | Wednesday 10 September 2025 01:06:34 +0000 (0:00:00.370) 0:00:01.030 *** 2025-09-10 01:07:50.083050 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 01:07:50.083062 | orchestrator | 2025-09-10 01:07:50.083073 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-09-10 01:07:50.083086 | orchestrator | Wednesday 10 September 2025 01:06:35 +0000 (0:00:00.517) 0:00:01.548 *** 2025-09-10 01:07:50.083097 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-09-10 01:07:50.083108 | orchestrator | 2025-09-10 01:07:50.083119 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-09-10 01:07:50.083130 | orchestrator | Wednesday 10 
September 2025 01:06:38 +0000 (0:00:03.632) 0:00:05.180 *** 2025-09-10 01:07:50.083140 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-09-10 01:07:50.083152 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-09-10 01:07:50.083163 | orchestrator | 2025-09-10 01:07:50.083174 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-09-10 01:07:50.083185 | orchestrator | Wednesday 10 September 2025 01:06:45 +0000 (0:00:07.020) 0:00:12.201 *** 2025-09-10 01:07:50.083196 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-10 01:07:50.083207 | orchestrator | 2025-09-10 01:07:50.083218 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-09-10 01:07:50.083229 | orchestrator | Wednesday 10 September 2025 01:06:49 +0000 (0:00:03.579) 0:00:15.780 *** 2025-09-10 01:07:50.083240 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-10 01:07:50.083250 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-09-10 01:07:50.083261 | orchestrator | 2025-09-10 01:07:50.083272 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-09-10 01:07:50.083282 | orchestrator | Wednesday 10 September 2025 01:06:53 +0000 (0:00:04.212) 0:00:19.993 *** 2025-09-10 01:07:50.083293 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-10 01:07:50.083304 | orchestrator | 2025-09-10 01:07:50.083315 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-09-10 01:07:50.083340 | orchestrator | Wednesday 10 September 2025 01:06:57 +0000 (0:00:03.516) 0:00:23.510 *** 2025-09-10 01:07:50.083351 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-09-10 01:07:50.083362 | 
orchestrator | 2025-09-10 01:07:50.083372 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-09-10 01:07:50.083383 | orchestrator | Wednesday 10 September 2025 01:07:01 +0000 (0:00:04.448) 0:00:27.959 *** 2025-09-10 01:07:50.083394 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:07:50.083405 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:07:50.083416 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:07:50.083426 | orchestrator | 2025-09-10 01:07:50.083437 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-09-10 01:07:50.083448 | orchestrator | Wednesday 10 September 2025 01:07:02 +0000 (0:00:00.417) 0:00:28.376 *** 2025-09-10 01:07:50.083462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-10 01:07:50.083527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-10 01:07:50.083547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-10 01:07:50.083559 | orchestrator | 2025-09-10 01:07:50.083570 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-09-10 01:07:50.083581 | orchestrator | Wednesday 10 September 2025 01:07:03 +0000 (0:00:01.070) 0:00:29.447 *** 2025-09-10 01:07:50.083592 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:07:50.083603 | 
orchestrator | 2025-09-10 01:07:50.083621 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-09-10 01:07:50.083632 | orchestrator | Wednesday 10 September 2025 01:07:03 +0000 (0:00:00.125) 0:00:29.573 *** 2025-09-10 01:07:50.083643 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:07:50.083654 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:07:50.083664 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:07:50.083675 | orchestrator | 2025-09-10 01:07:50.083685 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-09-10 01:07:50.083696 | orchestrator | Wednesday 10 September 2025 01:07:03 +0000 (0:00:00.589) 0:00:30.162 *** 2025-09-10 01:07:50.083707 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 01:07:50.083718 | orchestrator | 2025-09-10 01:07:50.083728 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-09-10 01:07:50.083739 | orchestrator | Wednesday 10 September 2025 01:07:04 +0000 (0:00:00.731) 0:00:30.894 *** 2025-09-10 01:07:50.083751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-10 01:07:50.083771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-10 01:07:50.083789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 
2025-09-10 01:07:50.083801 | orchestrator | 2025-09-10 01:07:50.083812 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-09-10 01:07:50.083822 | orchestrator | Wednesday 10 September 2025 01:07:06 +0000 (0:00:01.817) 0:00:32.711 *** 2025-09-10 01:07:50.083841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-10 01:07:50.083853 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:07:50.083864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-10 01:07:50.083876 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:07:50.083893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-10 01:07:50.083905 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:07:50.083915 | orchestrator | 2025-09-10 01:07:50.083926 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-09-10 01:07:50.083937 | orchestrator | Wednesday 10 September 2025 01:07:07 +0000 (0:00:00.965) 0:00:33.677 *** 2025-09-10 01:07:50.083953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-10 01:07:50.083971 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:07:50.083983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-10 01:07:50.083994 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:07:50.084005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-10 01:07:50.084016 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:07:50.084027 | orchestrator | 2025-09-10 01:07:50.084037 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-09-10 01:07:50.084048 | orchestrator | Wednesday 10 September 2025 01:07:08 +0000 (0:00:00.870) 0:00:34.547 *** 2025-09-10 01:07:50.084064 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-10 01:07:50.084076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-10 01:07:50.084099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-10 01:07:50.084111 | orchestrator | 2025-09-10 01:07:50.084122 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-09-10 01:07:50.084133 | orchestrator | Wednesday 10 September 2025 01:07:09 +0000 
(0:00:01.533) 0:00:36.080 *** 2025-09-10 01:07:50.084144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-10 01:07:50.084156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-10 01:07:50.084175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 
'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-10 01:07:50.084207 | orchestrator | 2025-09-10 01:07:50.084218 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-09-10 01:07:50.084229 | orchestrator | Wednesday 10 September 2025 01:07:12 +0000 (0:00:02.788) 0:00:38.868 *** 2025-09-10 01:07:50.084240 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-10 01:07:50.084251 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-10 01:07:50.084267 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-10 01:07:50.084278 | orchestrator | 2025-09-10 01:07:50.084289 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-09-10 01:07:50.084300 | orchestrator | Wednesday 10 September 2025 01:07:14 +0000 (0:00:01.506) 0:00:40.374 *** 2025-09-10 01:07:50.084311 | orchestrator | changed: [testbed-node-1] 2025-09-10 01:07:50.084322 | orchestrator | changed: [testbed-node-0] 2025-09-10 01:07:50.084333 | 
orchestrator | changed: [testbed-node-2] 2025-09-10 01:07:50.084344 | orchestrator | 2025-09-10 01:07:50.084354 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-09-10 01:07:50.084365 | orchestrator | Wednesday 10 September 2025 01:07:15 +0000 (0:00:01.390) 0:00:41.765 *** 2025-09-10 01:07:50.084376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-10 01:07:50.084388 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:07:50.084399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-10 01:07:50.084410 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:07:50.084428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-10 01:07:50.084454 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:07:50.084465 | orchestrator | 2025-09-10 01:07:50.084503 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-09-10 01:07:50.084515 | orchestrator | Wednesday 10 September 2025 01:07:15 +0000 (0:00:00.454) 0:00:42.220 *** 2025-09-10 01:07:50.084532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-10 01:07:50.084544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-10 01:07:50.084556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-10 01:07:50.084567 | orchestrator | 2025-09-10 01:07:50.084578 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-09-10 01:07:50.084589 | orchestrator | Wednesday 10 September 2025 01:07:17 +0000 (0:00:01.182) 0:00:43.402 *** 2025-09-10 01:07:50.084600 | orchestrator | changed: [testbed-node-0] 2025-09-10 01:07:50.084610 | orchestrator | 2025-09-10 01:07:50.084621 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-09-10 01:07:50.084632 | orchestrator | Wednesday 10 September 2025 01:07:19 +0000 (0:00:02.735) 0:00:46.137 *** 2025-09-10 01:07:50.084643 | orchestrator | changed: [testbed-node-0] 2025-09-10 01:07:50.084654 | orchestrator | 2025-09-10 01:07:50.084665 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-09-10 01:07:50.084675 | orchestrator | Wednesday 10 September 2025 01:07:22 +0000 (0:00:02.288) 0:00:48.426 *** 2025-09-10 01:07:50.084693 | orchestrator | changed: [testbed-node-0] 2025-09-10 01:07:50.084704 | orchestrator | 2025-09-10 01:07:50.084715 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-10 01:07:50.084726 | orchestrator | Wednesday 10 September 2025 01:07:35 +0000 (0:00:13.765) 0:01:02.191 *** 2025-09-10 01:07:50.084738 | orchestrator | 2025-09-10 01:07:50.084758 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-10 01:07:50.084777 | 
orchestrator | Wednesday 10 September 2025 01:07:35 +0000 (0:00:00.066) 0:01:02.257 ***
2025-09-10 01:07:50.084794 | orchestrator |
2025-09-10 01:07:50.084820 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-09-10 01:07:50.084838 | orchestrator | Wednesday 10 September 2025 01:07:35 +0000 (0:00:00.067) 0:01:02.325 ***
2025-09-10 01:07:50.084856 | orchestrator |
2025-09-10 01:07:50.084874 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2025-09-10 01:07:50.084893 | orchestrator | Wednesday 10 September 2025 01:07:36 +0000 (0:00:00.070) 0:01:02.395 ***
2025-09-10 01:07:50.084910 | orchestrator | changed: [testbed-node-0]
2025-09-10 01:07:50.084930 | orchestrator | changed: [testbed-node-1]
2025-09-10 01:07:50.084948 | orchestrator | changed: [testbed-node-2]
2025-09-10 01:07:50.084967 | orchestrator |
2025-09-10 01:07:50.084980 | orchestrator | PLAY RECAP *********************************************************************
2025-09-10 01:07:50.084991 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-10 01:07:50.085003 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-10 01:07:50.085014 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-10 01:07:50.085025 | orchestrator |
2025-09-10 01:07:50.085035 | orchestrator |
2025-09-10 01:07:50.085052 | orchestrator | TASKS RECAP ********************************************************************
2025-09-10 01:07:50.085064 | orchestrator | Wednesday 10 September 2025 01:07:46 +0000 (0:00:10.562) 0:01:12.957 ***
2025-09-10 01:07:50.085074 | orchestrator | ===============================================================================
2025-09-10 01:07:50.085085 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.77s
2025-09-10 01:07:50.085096 | orchestrator | placement : Restart placement-api container ---------------------------- 10.56s
2025-09-10 01:07:50.085106 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 7.02s
2025-09-10 01:07:50.085117 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.45s
2025-09-10 01:07:50.085128 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.21s
2025-09-10 01:07:50.085138 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.63s
2025-09-10 01:07:50.085149 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.58s
2025-09-10 01:07:50.085160 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.52s
2025-09-10 01:07:50.085170 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.79s
2025-09-10 01:07:50.085181 | orchestrator | placement : Creating placement databases -------------------------------- 2.74s
2025-09-10 01:07:50.085191 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.29s
2025-09-10 01:07:50.085202 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.82s
2025-09-10 01:07:50.085212 | orchestrator | placement : Copying over config.json files for services ----------------- 1.53s
2025-09-10 01:07:50.085223 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.51s
2025-09-10 01:07:50.085233 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.39s
2025-09-10 01:07:50.085253 | orchestrator | placement : Check placement containers ---------------------------------- 1.18s
2025-09-10 01:07:50.085264 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.07s
2025-09-10 01:07:50.085274 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.97s
2025-09-10 01:07:50.085285 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.87s
2025-09-10 01:07:50.085296 | orchestrator | placement : include_tasks ----------------------------------------------- 0.73s
2025-09-10 01:07:50.085307 | orchestrator | 2025-09-10 01:07:50 | INFO  | Wait 1 second(s) until the next check
2025-09-10 01:07:53.131817 | orchestrator | 2025-09-10 01:07:53 | INFO  | Task f589eda0-df10-4fa3-a519-e78f567baafc is in state STARTED
2025-09-10 01:07:53.134112 | orchestrator | 2025-09-10 01:07:53 | INFO  | Task ddbcd126-35f5-410a-b8f8-7c776b3d7d41 is in state STARTED
2025-09-10 01:07:53.136717 | orchestrator | 2025-09-10 01:07:53 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED
2025-09-10 01:07:53.140177 | orchestrator | 2025-09-10 01:07:53 | INFO  | Task aa7ce733-06c2-47fe-9e95-c4eada13f55c is in state STARTED
2025-09-10 01:07:53.140202 | orchestrator | 2025-09-10 01:07:53 | INFO  | Wait 1 second(s) until the next check
2025-09-10 01:07:56.181821 | orchestrator | 2025-09-10 01:07:56 | INFO  | Task f589eda0-df10-4fa3-a519-e78f567baafc is in state SUCCESS
2025-09-10 01:07:56.184294 | orchestrator | 2025-09-10 01:07:56 | INFO  | Task ddbcd126-35f5-410a-b8f8-7c776b3d7d41 is in state STARTED
2025-09-10 01:07:56.185095 | orchestrator | 2025-09-10 01:07:56 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED
2025-09-10 01:07:56.186127 | orchestrator | 2025-09-10 01:07:56 | INFO  | Task aa7ce733-06c2-47fe-9e95-c4eada13f55c is in state STARTED
2025-09-10 01:07:56.187115 | orchestrator | 2025-09-10 01:07:56 | INFO  | Task 917bbdb6-2035-4f1b-ac78-71ec915cfe59 is in state STARTED
2025-09-10 01:07:56.187883 | orchestrator | 2025-09-10 01:07:56 | INFO  | Wait 1 second(s) until the next check
2025-09-10
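Every placement-api service definition in the play above carries a kolla-style healthcheck block (`interval: 30`, `retries: 3`, `timeout: 30`, test `healthcheck_curl http://<api_ip>:8780`). As a rough sketch of what such a check amounts to — this is illustrative stdlib Python, not kolla's actual `healthcheck_curl` script — the probe succeeds as long as the endpoint answers with any non-5xx status:

```python
import urllib.request
import urllib.error

def healthcheck_curl(url: str, retries: int = 3, timeout: float = 30.0) -> bool:
    """Illustrative analogue of the container healthcheck in the log:
    succeed if the endpoint answers with a non-5xx status, fail after
    `retries` consecutive connection errors or timeouts."""
    for _ in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status < 500
        except urllib.error.HTTPError as exc:
            # The API answered; a 4xx still proves the process is up.
            return exc.code < 500
        except (urllib.error.URLError, OSError):
            continue  # connection refused / timed out: retry
    return False
```

The container runtime would run such a test every `interval` seconds and mark the container unhealthy once `retries` consecutive attempts fail.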
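After the play finishes, the OSISM wrapper polls its background task IDs until each leaves STARTED, sleeping between rounds ("Wait 1 second(s) until the next check"). A minimal sketch of that wait loop, assuming a caller-supplied `get_state(task_id)` lookup (the function and parameter names here are illustrative, not the actual osism client API):

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, max_checks=600):
    """Poll each task's state until all report SUCCESS, logging one line
    per task per round, like the job console output above."""
    pending = set(task_ids)
    for _ in range(max_checks):
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if not pending:
            return True
        print(f"Wait {int(interval)} second(s) until the next check")
        time.sleep(interval)
    return False  # gave up after max_checks rounds
```

Tasks that finish early (like f589eda0 above) simply drop out of the pending set while the loop keeps checking the rest.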
01:07:59.231261 | orchestrator | 2025-09-10 01:07:59 | INFO  | Task ddbcd126-35f5-410a-b8f8-7c776b3d7d41 is in state STARTED 2025-09-10 01:07:59.234994 | orchestrator | 2025-09-10 01:07:59 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:07:59.235981 | orchestrator | 2025-09-10 01:07:59 | INFO  | Task aa7ce733-06c2-47fe-9e95-c4eada13f55c is in state STARTED 2025-09-10 01:07:59.240914 | orchestrator | 2025-09-10 01:07:59 | INFO  | Task 917bbdb6-2035-4f1b-ac78-71ec915cfe59 is in state STARTED 2025-09-10 01:07:59.240936 | orchestrator | 2025-09-10 01:07:59 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:08:54.156160 | orchestrator | 2025-09-10 01:08:54 | INFO  | Task
ddbcd126-35f5-410a-b8f8-7c776b3d7d41 is in state STARTED 2025-09-10 01:08:54.156265 | orchestrator | 2025-09-10 01:08:54 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:08:54.156281 | orchestrator | 2025-09-10 01:08:54 | INFO  | Task aa7ce733-06c2-47fe-9e95-c4eada13f55c is in state STARTED 2025-09-10 01:08:54.156786 | orchestrator | 2025-09-10 01:08:54 | INFO  | Task 917bbdb6-2035-4f1b-ac78-71ec915cfe59 is in state STARTED 2025-09-10 01:08:54.156812 | orchestrator | 2025-09-10 01:08:54 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:08:57.197040 | orchestrator | 2025-09-10 01:08:57 | INFO  | Task ddbcd126-35f5-410a-b8f8-7c776b3d7d41 is in state STARTED 2025-09-10 01:08:57.197215 | orchestrator | 2025-09-10 01:08:57 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:08:57.198164 | orchestrator | 2025-09-10 01:08:57 | INFO  | Task aa7ce733-06c2-47fe-9e95-c4eada13f55c is in state STARTED 2025-09-10 01:08:57.199965 | orchestrator | 2025-09-10 01:08:57 | INFO  | Task 917bbdb6-2035-4f1b-ac78-71ec915cfe59 is in state STARTED 2025-09-10 01:08:57.199988 | orchestrator | 2025-09-10 01:08:57 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:09:00.237027 | orchestrator | 2025-09-10 01:09:00 | INFO  | Task ddbcd126-35f5-410a-b8f8-7c776b3d7d41 is in state STARTED 2025-09-10 01:09:00.237716 | orchestrator | 2025-09-10 01:09:00 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:09:00.238441 | orchestrator | 2025-09-10 01:09:00 | INFO  | Task aa7ce733-06c2-47fe-9e95-c4eada13f55c is in state STARTED 2025-09-10 01:09:00.239032 | orchestrator | 2025-09-10 01:09:00 | INFO  | Task 917bbdb6-2035-4f1b-ac78-71ec915cfe59 is in state STARTED 2025-09-10 01:09:00.239057 | orchestrator | 2025-09-10 01:09:00 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:09:03.278935 | orchestrator | 2025-09-10 01:09:03 | INFO  | Task 
ddbcd126-35f5-410a-b8f8-7c776b3d7d41 is in state STARTED 2025-09-10 01:09:03.284216 | orchestrator | 2025-09-10 01:09:03 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:09:03.284263 | orchestrator | 2025-09-10 01:09:03 | INFO  | Task aa7ce733-06c2-47fe-9e95-c4eada13f55c is in state STARTED 2025-09-10 01:09:03.284276 | orchestrator | 2025-09-10 01:09:03 | INFO  | Task 917bbdb6-2035-4f1b-ac78-71ec915cfe59 is in state STARTED 2025-09-10 01:09:03.284287 | orchestrator | 2025-09-10 01:09:03 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:09:06.302441 | orchestrator | 2025-09-10 01:09:06 | INFO  | Task ddbcd126-35f5-410a-b8f8-7c776b3d7d41 is in state STARTED 2025-09-10 01:09:06.302666 | orchestrator | 2025-09-10 01:09:06 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:09:06.303542 | orchestrator | 2025-09-10 01:09:06 | INFO  | Task aa7ce733-06c2-47fe-9e95-c4eada13f55c is in state STARTED 2025-09-10 01:09:06.304210 | orchestrator | 2025-09-10 01:09:06 | INFO  | Task 917bbdb6-2035-4f1b-ac78-71ec915cfe59 is in state STARTED 2025-09-10 01:09:06.304276 | orchestrator | 2025-09-10 01:09:06 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:09:09.346208 | orchestrator | 2025-09-10 01:09:09 | INFO  | Task ddbcd126-35f5-410a-b8f8-7c776b3d7d41 is in state STARTED 2025-09-10 01:09:09.347591 | orchestrator | 2025-09-10 01:09:09 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:09:09.349648 | orchestrator | 2025-09-10 01:09:09 | INFO  | Task aa7ce733-06c2-47fe-9e95-c4eada13f55c is in state STARTED 2025-09-10 01:09:09.351922 | orchestrator | 2025-09-10 01:09:09 | INFO  | Task 917bbdb6-2035-4f1b-ac78-71ec915cfe59 is in state STARTED 2025-09-10 01:09:09.352126 | orchestrator | 2025-09-10 01:09:09 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:09:12.391278 | orchestrator | 2025-09-10 01:09:12 | INFO  | Task 
ddbcd126-35f5-410a-b8f8-7c776b3d7d41 is in state STARTED
2025-09-10 01:09:12.393200 | orchestrator | 2025-09-10 01:09:12 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED
2025-09-10 01:09:12.396627 | orchestrator | 2025-09-10 01:09:12 | INFO  | Task aa7ce733-06c2-47fe-9e95-c4eada13f55c is in state SUCCESS
2025-09-10 01:09:12.398800 | orchestrator | 
2025-09-10 01:09:12.398836 | orchestrator | 
2025-09-10 01:09:12.398847 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-10 01:09:12.398858 | orchestrator | 
2025-09-10 01:09:12.398867 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-10 01:09:12.398877 | orchestrator | Wednesday 10 September 2025 01:07:51 +0000 (0:00:00.182) 0:00:00.182 ***
2025-09-10 01:09:12.398915 | orchestrator | ok: [testbed-node-0]
2025-09-10 01:09:12.398964 | orchestrator | ok: [testbed-node-1]
2025-09-10 01:09:12.398976 | orchestrator | ok: [testbed-node-2]
2025-09-10 01:09:12.398996 | orchestrator | 
2025-09-10 01:09:12.399007 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-10 01:09:12.399017 | orchestrator | Wednesday 10 September 2025 01:07:51 +0000 (0:00:00.296) 0:00:00.478 ***
2025-09-10 01:09:12.399026 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2025-09-10 01:09:12.399062 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2025-09-10 01:09:12.399072 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2025-09-10 01:09:12.399082 | orchestrator | 
2025-09-10 01:09:12.399091 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2025-09-10 01:09:12.399101 | orchestrator | 
2025-09-10 01:09:12.399111 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2025-09-10 01:09:12.399134 | orchestrator | Wednesday 10 September 2025 01:07:52 +0000 (0:00:00.620) 0:00:01.099 ***
2025-09-10 01:09:12.399144 | orchestrator | ok: [testbed-node-0]
2025-09-10 01:09:12.399154 | orchestrator | ok: [testbed-node-1]
2025-09-10 01:09:12.399163 | orchestrator | ok: [testbed-node-2]
2025-09-10 01:09:12.399173 | orchestrator | 
2025-09-10 01:09:12.399182 | orchestrator | PLAY RECAP *********************************************************************
2025-09-10 01:09:12.399192 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-10 01:09:12.399204 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-10 01:09:12.399214 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-10 01:09:12.399224 | orchestrator | 
2025-09-10 01:09:12.399233 | orchestrator | 
2025-09-10 01:09:12.399243 | orchestrator | TASKS RECAP ********************************************************************
2025-09-10 01:09:12.399254 | orchestrator | Wednesday 10 September 2025 01:07:52 +0000 (0:00:00.727) 0:00:01.827 ***
2025-09-10 01:09:12.399263 | orchestrator | ===============================================================================
2025-09-10 01:09:12.399273 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.73s
2025-09-10 01:09:12.399283 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.62s
2025-09-10 01:09:12.399292 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s
2025-09-10 01:09:12.399301 | orchestrator | 
2025-09-10 01:09:12.399311 | orchestrator | 
2025-09-10 01:09:12.399321 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-10 01:09:12.399330 | orchestrator | 
2025-09-10 01:09:12.399340 | orchestrator | TASK [Group hosts based on Kolla action] 
***************************************
2025-09-10 01:09:12.399349 | orchestrator | Wednesday 10 September 2025 01:07:10 +0000 (0:00:00.289) 0:00:00.290 ***
2025-09-10 01:09:12.399358 | orchestrator | ok: [testbed-node-0]
2025-09-10 01:09:12.399368 | orchestrator | ok: [testbed-node-1]
2025-09-10 01:09:12.399378 | orchestrator | ok: [testbed-node-2]
2025-09-10 01:09:12.399388 | orchestrator | 
2025-09-10 01:09:12.399400 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-10 01:09:12.399411 | orchestrator | Wednesday 10 September 2025 01:07:10 +0000 (0:00:00.374) 0:00:00.664 ***
2025-09-10 01:09:12.399422 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2025-09-10 01:09:12.399434 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2025-09-10 01:09:12.399445 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2025-09-10 01:09:12.399457 | orchestrator | 
2025-09-10 01:09:12.399468 | orchestrator | PLAY [Apply role magnum] *******************************************************
2025-09-10 01:09:12.399479 | orchestrator | 
2025-09-10 01:09:12.399514 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-09-10 01:09:12.399525 | orchestrator | Wednesday 10 September 2025 01:07:11 +0000 (0:00:00.451) 0:00:01.116 ***
2025-09-10 01:09:12.399545 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-10 01:09:12.399557 | orchestrator | 
2025-09-10 01:09:12.399568 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2025-09-10 01:09:12.399592 | orchestrator | Wednesday 10 September 2025 01:07:11 +0000 (0:00:00.586) 0:00:01.703 ***
2025-09-10 01:09:12.399604 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2025-09-10 01:09:12.399616 | orchestrator | 
2025-09-10 01:09:12.399627 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2025-09-10 01:09:12.399638 | orchestrator | Wednesday 10 September 2025 01:07:15 +0000 (0:00:03.590) 0:00:05.294 ***
2025-09-10 01:09:12.399649 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2025-09-10 01:09:12.399662 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2025-09-10 01:09:12.399673 | orchestrator | 
2025-09-10 01:09:12.399683 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2025-09-10 01:09:12.399692 | orchestrator | Wednesday 10 September 2025 01:07:22 +0000 (0:00:07.115) 0:00:12.410 ***
2025-09-10 01:09:12.399702 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-10 01:09:12.399711 | orchestrator | 
2025-09-10 01:09:12.399721 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2025-09-10 01:09:12.399731 | orchestrator | Wednesday 10 September 2025 01:07:25 +0000 (0:00:03.350) 0:00:15.760 ***
2025-09-10 01:09:12.399752 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-10 01:09:12.399762 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2025-09-10 01:09:12.399772 | orchestrator | 
2025-09-10 01:09:12.399782 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2025-09-10 01:09:12.399791 | orchestrator | Wednesday 10 September 2025 01:07:29 +0000 (0:00:04.119) 0:00:19.879 ***
2025-09-10 01:09:12.399801 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-10 01:09:12.399810 | orchestrator | 
2025-09-10 01:09:12.399820 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2025-09-10 01:09:12.399829 | orchestrator | Wednesday 10 September 2025 01:07:33 +0000 (0:00:03.351) 0:00:23.230 ***
2025-09-10 01:09:12.399839 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2025-09-10 01:09:12.399848 | orchestrator | 
2025-09-10 01:09:12.399858 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2025-09-10 01:09:12.399867 | orchestrator | Wednesday 10 September 2025 01:07:37 +0000 (0:00:04.514) 0:00:27.745 ***
2025-09-10 01:09:12.399876 | orchestrator | changed: [testbed-node-0]
2025-09-10 01:09:12.399886 | orchestrator | 
2025-09-10 01:09:12.399895 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2025-09-10 01:09:12.399905 | orchestrator | Wednesday 10 September 2025 01:07:41 +0000 (0:00:03.498) 0:00:31.244 ***
2025-09-10 01:09:12.399914 | orchestrator | changed: [testbed-node-0]
2025-09-10 01:09:12.399924 | orchestrator | 
2025-09-10 01:09:12.399933 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2025-09-10 01:09:12.399943 | orchestrator | Wednesday 10 September 2025 01:07:45 +0000 (0:00:03.915) 0:00:35.160 ***
2025-09-10 01:09:12.399952 | orchestrator | changed: [testbed-node-0]
2025-09-10 01:09:12.399961 | orchestrator | 
2025-09-10 01:09:12.399971 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2025-09-10 01:09:12.399980 | orchestrator | Wednesday 10 September 2025 01:07:49 +0000 (0:00:03.929) 0:00:39.089 ***
2025-09-10 01:09:12.399993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-10 01:09:12.400015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-10 01:09:12.400030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': 
{'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-10 01:09:12.400053 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-10 01:09:12.400072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-10 01:09:12.400090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-10 01:09:12.400115 | orchestrator | 
2025-09-10 01:09:12.400131 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2025-09-10 01:09:12.400146 | orchestrator | Wednesday 10 September 2025 01:07:50 +0000 (0:00:01.645) 0:00:40.735 ***
2025-09-10 01:09:12.400163 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:09:12.400180 | orchestrator | 
2025-09-10 01:09:12.400196 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2025-09-10 01:09:12.400211 | orchestrator | Wednesday 10 September 2025 01:07:50 +0000 (0:00:00.128) 0:00:40.863 ***
2025-09-10 01:09:12.400227 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:09:12.400244 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:09:12.400261 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:09:12.400278 | orchestrator | 
2025-09-10 01:09:12.400296 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2025-09-10 01:09:12.400313 | orchestrator | Wednesday 10 September 2025 01:07:51 +0000 (0:00:00.573) 0:00:41.436 ***
2025-09-10 01:09:12.400328 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-10 01:09:12.400346 | orchestrator | 
2025-09-10 01:09:12.400365 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2025-09-10 01:09:12.400383 | 
orchestrator | Wednesday 10 September 2025 01:07:52 +0000 (0:00:00.926) 0:00:42.363 *** 2025-09-10 01:09:12.400406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-10 01:09:12.400430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-10 01:09:12.400441 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-10 01:09:12.400461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-10 01:09:12.400471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-10 01:09:12.400510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-10 01:09:12.400521 | orchestrator | 2025-09-10 01:09:12.400531 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-09-10 01:09:12.400541 | orchestrator | Wednesday 10 September 2025 01:07:54 +0000 (0:00:02.545) 0:00:44.908 *** 2025-09-10 01:09:12.400551 | orchestrator | ok: [testbed-node-0] 2025-09-10 01:09:12.400560 | orchestrator | ok: [testbed-node-1] 2025-09-10 01:09:12.400570 | orchestrator | ok: [testbed-node-2] 2025-09-10 01:09:12.400580 | orchestrator | 2025-09-10 01:09:12.400589 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-09-10 01:09:12.400605 | orchestrator | Wednesday 10 September 2025 01:07:55 +0000 (0:00:00.341) 0:00:45.249 *** 2025-09-10 01:09:12.400615 | 
orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 01:09:12.400625 | orchestrator | 2025-09-10 01:09:12.400635 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-09-10 01:09:12.400644 | orchestrator | Wednesday 10 September 2025 01:07:56 +0000 (0:00:00.773) 0:00:46.023 *** 2025-09-10 01:09:12.400654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-10 01:09:12.400671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 
'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-10 01:09:12.400681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-10 01:09:12.400697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-10 
01:09:12.400714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-10 01:09:12.400730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-10 01:09:12.400740 | orchestrator | 2025-09-10 01:09:12.400750 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-09-10 01:09:12.400760 | orchestrator | Wednesday 10 September 2025 01:07:58 +0000 (0:00:02.869) 0:00:48.892 *** 2025-09-10 01:09:12.400770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-10 01:09:12.400781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-10 01:09:12.400795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-10 01:09:12.400806 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:09:12.400823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-10 01:09:12.400839 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:09:12.400849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-10 01:09:12.400859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-10 01:09:12.400869 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:09:12.400879 | orchestrator | 2025-09-10 01:09:12.400888 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-09-10 01:09:12.400898 | orchestrator | Wednesday 10 September 2025 01:07:59 +0000 (0:00:00.753) 0:00:49.645 *** 2025-09-10 01:09:12.400918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-10 01:09:12.400929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-10 01:09:12.400946 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:09:12.400962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': 
'9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-10 01:09:12.400972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-10 01:09:12.400982 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:09:12.400993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-10 01:09:12.401008 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-10 01:09:12.401018 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:09:12.401028 | orchestrator | 2025-09-10 01:09:12.401038 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-09-10 01:09:12.401047 | orchestrator | Wednesday 10 September 2025 01:08:00 +0000 (0:00:01.103) 0:00:50.749 *** 2025-09-10 01:09:12.401290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': 
'9511'}}}}) 2025-09-10 01:09:12.401314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-10 01:09:12.401324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-10 01:09:12.401335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': 
{'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-10 01:09:12.401350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-10 01:09:12.401373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-10 01:09:12.401384 | orchestrator | 2025-09-10 01:09:12.401394 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-09-10 01:09:12.401403 | orchestrator | Wednesday 10 September 2025 01:08:03 +0000 (0:00:02.565) 0:00:53.314 *** 2025-09-10 01:09:12.401414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-10 01:09:12.401424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-10 01:09:12.401433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-10 01:09:12.401444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-10 01:09:12.401467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-10 01:09:12.401556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-10 01:09:12.401570 | orchestrator | 2025-09-10 01:09:12.401580 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-09-10 01:09:12.401590 | orchestrator | Wednesday 10 September 2025 01:08:08 +0000 (0:00:05.416) 0:00:58.730 *** 2025-09-10 01:09:12.401600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-10 01:09:12.401610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-10 01:09:12.401620 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:09:12.401641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-10 01:09:12.401658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-10 01:09:12.401669 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:09:12.401679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-10 01:09:12.401689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-10 01:09:12.401699 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:09:12.401711 | orchestrator | 2025-09-10 01:09:12.401727 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-09-10 01:09:12.401743 | orchestrator | Wednesday 10 September 2025 01:08:09 +0000 (0:00:00.877) 0:00:59.608 *** 2025-09-10 01:09:12.401765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-10 01:09:12.401799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-10 01:09:12.401816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-10 01:09:12.401834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-10 01:09:12.401847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-10 01:09:12.401862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-10 01:09:12.401879 | orchestrator |
2025-09-10 01:09:12.401889 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-09-10 01:09:12.401899 | orchestrator | Wednesday 10 September 2025 01:08:13 +0000 (0:00:03.404) 0:01:03.012 ***
2025-09-10 01:09:12.401911 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:09:12.401922 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:09:12.401933 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:09:12.401945 | orchestrator |
2025-09-10 01:09:12.401955 | orchestrator | TASK [magnum : Creating Magnum database] ***************************************
2025-09-10 01:09:12.401967 | orchestrator | Wednesday 10 September 2025 01:08:13 +0000 (0:00:00.360) 0:01:03.373 ***
2025-09-10 01:09:12.401979 | orchestrator | changed: [testbed-node-0]
2025-09-10 01:09:12.401989 | orchestrator |
2025-09-10 01:09:12.402000 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] **********
2025-09-10 01:09:12.402011 | orchestrator | Wednesday 10 September 2025 01:08:15 +0000 (0:00:02.259) 0:01:05.632 ***
2025-09-10 01:09:12.402076 | orchestrator | changed: [testbed-node-0]
2025-09-10 01:09:12.402088 | orchestrator |
2025-09-10 01:09:12.402099 | orchestrator | TASK [magnum : Running Magnum bootstrap container] *****************************
2025-09-10 01:09:12.402111 | orchestrator | Wednesday 10 September 2025 01:08:17 +0000 (0:00:02.271) 0:01:07.903 ***
2025-09-10 01:09:12.402129 | orchestrator | changed: [testbed-node-0]
2025-09-10 01:09:12.402141 | orchestrator |
2025-09-10 01:09:12.402151 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-09-10 01:09:12.402163 | orchestrator | Wednesday 10 September 2025 01:08:37 +0000 (0:00:19.251) 0:01:27.155 ***
2025-09-10 01:09:12.402174 | orchestrator |
2025-09-10 01:09:12.402185 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-09-10 01:09:12.402196 | orchestrator | Wednesday 10 September 2025 01:08:37 +0000 (0:00:00.062) 0:01:27.218 ***
2025-09-10 01:09:12.402208 | orchestrator |
2025-09-10 01:09:12.402219 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-09-10 01:09:12.402230 | orchestrator | Wednesday 10 September 2025 01:08:37 +0000 (0:00:00.063) 0:01:27.282 ***
2025-09-10 01:09:12.402241 | orchestrator |
2025-09-10 01:09:12.402252 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************
2025-09-10 01:09:12.402263 | orchestrator | Wednesday 10 September 2025 01:08:37 +0000 (0:00:00.066) 0:01:27.349 ***
2025-09-10 01:09:12.402273 | orchestrator | changed: [testbed-node-0]
2025-09-10 01:09:12.402282 | orchestrator | changed: [testbed-node-1]
2025-09-10 01:09:12.402292 | orchestrator | changed: [testbed-node-2]
2025-09-10 01:09:12.402302 | orchestrator |
2025-09-10 01:09:12.402311 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ******************
2025-09-10 01:09:12.402321 | orchestrator | Wednesday 10 September 2025 01:08:54 +0000 (0:00:17.069) 0:01:44.418 ***
2025-09-10 01:09:12.402330 | orchestrator | changed: [testbed-node-0]
2025-09-10 01:09:12.402340 | orchestrator | changed: [testbed-node-1]
2025-09-10 01:09:12.402350 | orchestrator | changed: [testbed-node-2]
2025-09-10 01:09:12.402359 | orchestrator |
2025-09-10 01:09:12.402369 | orchestrator | PLAY RECAP *********************************************************************
2025-09-10 01:09:12.402379 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-10 01:09:12.402390 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-10 01:09:12.402407 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-10 01:09:12.402417 | orchestrator |
2025-09-10 01:09:12.402427 | orchestrator |
2025-09-10 01:09:12.402436 | orchestrator | TASKS RECAP ********************************************************************
2025-09-10 01:09:12.402446 | orchestrator | Wednesday 10 September 2025 01:09:09 +0000 (0:00:15.453) 0:01:59.872 ***
2025-09-10 01:09:12.402455 | orchestrator | ===============================================================================
2025-09-10 01:09:12.402465 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 19.25s
2025-09-10 01:09:12.402474 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 17.07s
2025-09-10 01:09:12.402500 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 15.45s
2025-09-10 01:09:12.402509 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 7.12s
2025-09-10 01:09:12.402519 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.42s
2025-09-10 01:09:12.402528 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.51s
2025-09-10 01:09:12.402538 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.12s
2025-09-10
01:09:12.402547 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.93s 2025-09-10 01:09:12.402556 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.92s 2025-09-10 01:09:12.402566 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.59s 2025-09-10 01:09:12.402575 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.50s 2025-09-10 01:09:12.402585 | orchestrator | magnum : Check magnum containers ---------------------------------------- 3.40s 2025-09-10 01:09:12.402594 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.35s 2025-09-10 01:09:12.402604 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.35s 2025-09-10 01:09:12.402613 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.87s 2025-09-10 01:09:12.402632 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.57s 2025-09-10 01:09:12.402642 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.55s 2025-09-10 01:09:12.402651 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.27s 2025-09-10 01:09:12.402660 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.26s 2025-09-10 01:09:12.402670 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.65s 2025-09-10 01:09:12.402680 | orchestrator | 2025-09-10 01:09:12 | INFO  | Task 917bbdb6-2035-4f1b-ac78-71ec915cfe59 is in state STARTED 2025-09-10 01:09:12.402690 | orchestrator | 2025-09-10 01:09:12 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:09:15.440656 | orchestrator | 2025-09-10 01:09:15 | INFO  | Task ddbcd126-35f5-410a-b8f8-7c776b3d7d41 is in state STARTED 2025-09-10 
01:09:15.441989 | orchestrator | 2025-09-10 01:09:15 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:09:15.443505 | orchestrator | 2025-09-10 01:09:15 | INFO  | Task 917bbdb6-2035-4f1b-ac78-71ec915cfe59 is in state STARTED 2025-09-10 01:09:15.443534 | orchestrator | 2025-09-10 01:09:15 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:09:18.492584 | orchestrator | 2025-09-10 01:09:18 | INFO  | Task ddbcd126-35f5-410a-b8f8-7c776b3d7d41 is in state STARTED 2025-09-10 01:09:18.494932 | orchestrator | 2025-09-10 01:09:18 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:09:18.496707 | orchestrator | 2025-09-10 01:09:18 | INFO  | Task 917bbdb6-2035-4f1b-ac78-71ec915cfe59 is in state STARTED 2025-09-10 01:09:18.496763 | orchestrator | 2025-09-10 01:09:18 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:09:21.534163 | orchestrator | 2025-09-10 01:09:21 | INFO  | Task ddbcd126-35f5-410a-b8f8-7c776b3d7d41 is in state STARTED 2025-09-10 01:09:21.536369 | orchestrator | 2025-09-10 01:09:21 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:09:21.537471 | orchestrator | 2025-09-10 01:09:21 | INFO  | Task 917bbdb6-2035-4f1b-ac78-71ec915cfe59 is in state STARTED 2025-09-10 01:09:21.537768 | orchestrator | 2025-09-10 01:09:21 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:09:24.596604 | orchestrator | 2025-09-10 01:09:24 | INFO  | Task ddbcd126-35f5-410a-b8f8-7c776b3d7d41 is in state STARTED 2025-09-10 01:09:24.598076 | orchestrator | 2025-09-10 01:09:24 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:09:24.600959 | orchestrator | 2025-09-10 01:09:24 | INFO  | Task 917bbdb6-2035-4f1b-ac78-71ec915cfe59 is in state STARTED 2025-09-10 01:09:24.601054 | orchestrator | 2025-09-10 01:09:24 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:09:27.646397 | orchestrator | 2025-09-10 01:09:27 | 
INFO  | Task ddbcd126-35f5-410a-b8f8-7c776b3d7d41 is in state STARTED 2025-09-10 01:09:27.647575 | orchestrator | 2025-09-10 01:09:27 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:09:27.648609 | orchestrator | 2025-09-10 01:09:27 | INFO  | Task 917bbdb6-2035-4f1b-ac78-71ec915cfe59 is in state STARTED 2025-09-10 01:09:27.649224 | orchestrator | 2025-09-10 01:09:27 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:09:30.696640 | orchestrator | 2025-09-10 01:09:30 | INFO  | Task ddbcd126-35f5-410a-b8f8-7c776b3d7d41 is in state STARTED 2025-09-10 01:09:30.698014 | orchestrator | 2025-09-10 01:09:30 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:09:30.699790 | orchestrator | 2025-09-10 01:09:30 | INFO  | Task 917bbdb6-2035-4f1b-ac78-71ec915cfe59 is in state STARTED 2025-09-10 01:09:30.699822 | orchestrator | 2025-09-10 01:09:30 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:09:33.751934 | orchestrator | 2025-09-10 01:09:33 | INFO  | Task ddbcd126-35f5-410a-b8f8-7c776b3d7d41 is in state STARTED 2025-09-10 01:09:33.753159 | orchestrator | 2025-09-10 01:09:33 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:09:33.754812 | orchestrator | 2025-09-10 01:09:33 | INFO  | Task 917bbdb6-2035-4f1b-ac78-71ec915cfe59 is in state STARTED 2025-09-10 01:09:33.754836 | orchestrator | 2025-09-10 01:09:33 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:09:36.803632 | orchestrator | 2025-09-10 01:09:36 | INFO  | Task ddbcd126-35f5-410a-b8f8-7c776b3d7d41 is in state STARTED 2025-09-10 01:09:36.807141 | orchestrator | 2025-09-10 01:09:36 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:09:36.808887 | orchestrator | 2025-09-10 01:09:36 | INFO  | Task 917bbdb6-2035-4f1b-ac78-71ec915cfe59 is in state STARTED 2025-09-10 01:09:36.809130 | orchestrator | 2025-09-10 01:09:36 | INFO  | Wait 1 second(s) until 
the next check 2025-09-10 01:09:39.864979 | orchestrator | 2025-09-10 01:09:39 | INFO  | Task ddbcd126-35f5-410a-b8f8-7c776b3d7d41 is in state SUCCESS 2025-09-10 01:09:39.867362 | orchestrator | 2025-09-10 01:09:39.867569 | orchestrator | 2025-09-10 01:09:39.867594 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-10 01:09:39.867608 | orchestrator | 2025-09-10 01:09:39.867619 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-10 01:09:39.867659 | orchestrator | Wednesday 10 September 2025 01:07:16 +0000 (0:00:00.267) 0:00:00.267 *** 2025-09-10 01:09:39.867671 | orchestrator | ok: [testbed-node-0] 2025-09-10 01:09:39.867683 | orchestrator | ok: [testbed-node-1] 2025-09-10 01:09:39.867693 | orchestrator | ok: [testbed-node-2] 2025-09-10 01:09:39.867704 | orchestrator | 2025-09-10 01:09:39.867715 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-10 01:09:39.867726 | orchestrator | Wednesday 10 September 2025 01:07:16 +0000 (0:00:00.336) 0:00:00.603 *** 2025-09-10 01:09:39.867737 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-09-10 01:09:39.867748 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-09-10 01:09:39.867759 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-09-10 01:09:39.867770 | orchestrator | 2025-09-10 01:09:39.867781 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-09-10 01:09:39.867791 | orchestrator | 2025-09-10 01:09:39.867802 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-09-10 01:09:39.867813 | orchestrator | Wednesday 10 September 2025 01:07:16 +0000 (0:00:00.449) 0:00:01.053 *** 2025-09-10 01:09:39.867823 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2025-09-10 01:09:39.867834 | orchestrator | 2025-09-10 01:09:39.867845 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-09-10 01:09:39.867856 | orchestrator | Wednesday 10 September 2025 01:07:17 +0000 (0:00:00.532) 0:00:01.585 *** 2025-09-10 01:09:39.867870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-10 01:09:39.867886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-10 01:09:39.867897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-10 01:09:39.867909 | orchestrator | 2025-09-10 01:09:39.867922 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-09-10 01:09:39.867935 | orchestrator | Wednesday 10 September 2025 01:07:18 +0000 (0:00:00.796) 0:00:02.381 *** 2025-09-10 01:09:39.867947 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-09-10 01:09:39.868104 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-09-10 01:09:39.868127 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-10 01:09:39.868139 | orchestrator | 2025-09-10 01:09:39.868150 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-09-10 01:09:39.868160 | orchestrator | Wednesday 10 September 2025 01:07:18 +0000 (0:00:00.829) 0:00:03.210 *** 2025-09-10 01:09:39.868171 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 01:09:39.868182 | orchestrator | 2025-09-10 01:09:39.868207 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-09-10 01:09:39.868219 | orchestrator | Wednesday 10 September 2025 01:07:19 +0000 (0:00:00.725) 0:00:03.936 *** 2025-09-10 01:09:39.868250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 
'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-10 01:09:39.868262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-10 01:09:39.868274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-10 01:09:39.868285 | orchestrator | 2025-09-10 01:09:39.868296 | orchestrator | TASK [service-cert-copy : grafana | 
Copying over backend internal TLS certificate] *** 2025-09-10 01:09:39.868307 | orchestrator | Wednesday 10 September 2025 01:07:21 +0000 (0:00:01.471) 0:00:05.408 *** 2025-09-10 01:09:39.868318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-10 01:09:39.868329 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:09:39.868340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-10 01:09:39.868359 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:09:39.868382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-10 01:09:39.868394 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:09:39.868405 | orchestrator | 2025-09-10 01:09:39.868416 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-09-10 01:09:39.868427 | orchestrator | Wednesday 10 September 2025 01:07:21 +0000 (0:00:00.356) 0:00:05.764 *** 2025-09-10 01:09:39.868438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-10 01:09:39.868450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-10 01:09:39.868461 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:09:39.868472 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:09:39.868483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-10 01:09:39.868519 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:09:39.868530 | orchestrator | 2025-09-10 01:09:39.868540 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-09-10 01:09:39.868551 | orchestrator | Wednesday 10 September 2025 01:07:22 +0000 (0:00:01.049) 0:00:06.814 *** 2025-09-10 01:09:39.868570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-10 01:09:39.868588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-10 01:09:39.868607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-10 01:09:39.868619 | orchestrator | 2025-09-10 01:09:39.868629 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-09-10 01:09:39.868640 | orchestrator | Wednesday 10 September 2025 01:07:23 +0000 (0:00:01.269) 0:00:08.083 *** 2025-09-10 01:09:39.868651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 
'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-10 01:09:39.868662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-10 01:09:39.868674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-10 01:09:39.868716 | orchestrator | 2025-09-10 01:09:39.868728 
| orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-09-10 01:09:39.868739 | orchestrator | Wednesday 10 September 2025 01:07:25 +0000 (0:00:01.365) 0:00:09.449 *** 2025-09-10 01:09:39.868750 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:09:39.868760 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:09:39.868771 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:09:39.868782 | orchestrator | 2025-09-10 01:09:39.868792 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-09-10 01:09:39.868803 | orchestrator | Wednesday 10 September 2025 01:07:25 +0000 (0:00:00.493) 0:00:09.942 *** 2025-09-10 01:09:39.868813 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-10 01:09:39.868824 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-10 01:09:39.868835 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-10 01:09:39.868845 | orchestrator | 2025-09-10 01:09:39.868856 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-09-10 01:09:39.868867 | orchestrator | Wednesday 10 September 2025 01:07:27 +0000 (0:00:01.336) 0:00:11.279 *** 2025-09-10 01:09:39.868877 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-10 01:09:39.868888 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-10 01:09:39.868904 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-10 01:09:39.868915 | orchestrator | 2025-09-10 01:09:39.868926 | orchestrator | TASK [grafana : Find custom grafana 
dashboards] ******************************** 2025-09-10 01:09:39.868937 | orchestrator | Wednesday 10 September 2025 01:07:28 +0000 (0:00:01.446) 0:00:12.725 ***
2025-09-10 01:09:39.868953 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-10 01:09:39.868965 | orchestrator |
2025-09-10 01:09:39.868975 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2025-09-10 01:09:39.868986 | orchestrator | Wednesday 10 September 2025 01:07:29 +0000 (0:00:00.939) 0:00:13.664 ***
2025-09-10 01:09:39.868997 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access issue: '/etc/kolla/grafana/dashboards' is not a directory
2025-09-10 01:09:39.869018 | orchestrator | ok: [testbed-node-0]
2025-09-10 01:09:39.869029 | orchestrator | ok: [testbed-node-1]
2025-09-10 01:09:39.869040 | orchestrator | ok: [testbed-node-2]
2025-09-10 01:09:39.869050 | orchestrator |
2025-09-10 01:09:39.869061 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2025-09-10 01:09:39.869072 | orchestrator | Wednesday 10 September 2025 01:07:30 +0000 (0:00:00.828) 0:00:14.492 ***
2025-09-10 01:09:39.869082 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:09:39.869093 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:09:39.869104 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:09:39.869114 | orchestrator |
2025-09-10 01:09:39.869125 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2025-09-10 01:09:39.869136 | orchestrator | Wednesday 10 September 2025 01:07:30 +0000 (0:00:00.557) 0:00:15.049 ***
2025-09-10 01:09:39.869148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1093910, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.478427, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'size': 117836, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'size': 117836, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'size': 25686, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'size': 25686, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'size': 25686, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'size': 25279, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'size': 25279, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'size': 25279, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'size': 167897, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'size': 167897, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'size': 167897, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'size': 26655, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'size': 26655, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'size': 26655, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'size': 39556, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'size': 39556, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'size': 39556, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'size': 84, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'size': 84, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'size': 84, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'size': 34113, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'size': 34113, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'size': 34113, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'size': 9025, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'size': 9025, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'size': 9025, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'size': 19609, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'size': 19609, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'size': 19609, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'size': 12997, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'size': 12997, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'size': 12997, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'size': 80386, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'size': 80386, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'size': 80386, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'size': 19695, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'size': 19695, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'size': 19695, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'size': 38432, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'size': 38432, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'size': 38432, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'size': 62676, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'size': 62676, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'size': 62676, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'size': 27218, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'size': 27218, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'size': 27218, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'size': 49139, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'size': 49139, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'size': 49139, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'size': 44791, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'size': 44791, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'size': 44791, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'size': 16156, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'size': 16156, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'size': 16156, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'size': 57270, ...}})
2025-09-10 01:09:39 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'size': 57270, ...}})
2025-09-10 01:09:39.870823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True,
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1094322, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5524933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.870833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1094219, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5199332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.870847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1094219, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5199332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.870869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1094219, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5199332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.870879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1094193, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5095122, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.870889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1094193, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5095122, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.870899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1094193, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5095122, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.870909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1094244, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5254471, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.870919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1094244, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5254471, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.870949 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1094244, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5254471, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.870960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1094178, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5061712, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.870971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1094178, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5061712, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.870981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1094178, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5061712, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.870991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1094284, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5389335, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.871000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1094284, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5389335, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.871025 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1094284, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5389335, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.871036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1094247, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.535571, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.871046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1094247, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 
'mtime': 1757462531.0, 'ctime': 1757463416.535571, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.871056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1094247, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.535571, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.871066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1094288, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.540994, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.871076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1094288, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.540994, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.871102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1094288, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.540994, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.871112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1094317, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5509336, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.871122 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1094317, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5509336, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.871132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1094317, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5509336, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.871142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1094278, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5385492, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.871152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1094278, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5385492, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.871172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1094278, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5385492, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.871188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1094237, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5229702, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.871199 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1094237, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5229702, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.871209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1094237, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5229702, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.871218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1094214, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5136445, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.871228 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1094214, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5136445, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.871244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1094214, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5136445, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.871266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1094234, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5225732, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2025-09-10 01:09:39.871277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1094234, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5225732, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.871287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1094234, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5225732, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.871297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1094201, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5127115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.871306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1094201, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5127115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.871322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1094201, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5127115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.871342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1094239, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5229702, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.871352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1094239, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5229702, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.871362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1094239, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5229702, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.871372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1094303, 'dev': 122, 
'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5503607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.871387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1094303, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5503607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.871397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1094303, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5503607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.871416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1094296, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5441034, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.871427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1094296, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5441034, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.871437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1094296, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5441034, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.871446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1094185, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.506933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.871462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1094185, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.506933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.871472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1094185, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.506933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.871507 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1094189, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.508368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.871519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1094189, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.508368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.871529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1094276, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5369334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
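The dashboard-copy loop above prints a full Ansible stat-style dict per file ('mode': '0644' alongside the boolean 'rusr'/'wusr'/… flags). A small sketch of how those flags map back to the familiar symbolic permission string; the helper name and the standalone dict are illustrative, not part of the job:

```python
def symbolic_mode(info):
    """Render rwxrwxrwx-style permissions from Ansible stat-style flags."""
    bits = [("rusr", "r"), ("wusr", "w"), ("xusr", "x"),
            ("rgrp", "r"), ("wgrp", "w"), ("xgrp", "x"),
            ("roth", "r"), ("woth", "w"), ("xoth", "x")]
    return "".join(sym if info.get(flag) else "-" for flag, sym in bits)

# Flags as dumped for the dashboard files above (mode 0644).
info = {"rusr": True, "wusr": True, "xusr": False,
        "rgrp": True, "wgrp": False, "xgrp": False,
        "roth": True, "woth": False, "xoth": False}
print(symbolic_mode(info))  # → rw-r--r--
```

This matches the logged 'mode': '0644' entries: owner read/write, group and other read-only.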
2025-09-10 01:09:39.871539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1094189, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.508368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.871558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1094291, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5417044, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.871568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1094276, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5369334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.871582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1094276, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5369334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.871599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1094291, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5417044, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.871609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1094291, 'dev': 122, 'nlink': 1, 'atime': 1757462531.0, 'mtime': 1757462531.0, 'ctime': 1757463416.5417044, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-10 01:09:39.871620 | orchestrator | 2025-09-10 01:09:39.871631 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-09-10 01:09:39.871642 | orchestrator | Wednesday 10 September 2025 01:08:09 +0000 (0:00:38.932) 0:00:53.982 *** 2025-09-10 01:09:39.871652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-10 01:09:39.871669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-10 01:09:39.871679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': 
{'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-10 01:09:39.871689 | orchestrator | 2025-09-10 01:09:39.871699 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-09-10 01:09:39.871709 | orchestrator | Wednesday 10 September 2025 01:08:11 +0000 (0:00:01.525) 0:00:55.508 *** 2025-09-10 01:09:39.871718 | orchestrator | changed: [testbed-node-0] 2025-09-10 01:09:39.871728 | orchestrator | 2025-09-10 01:09:39.871738 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-09-10 01:09:39.871747 | orchestrator | Wednesday 10 September 2025 01:08:13 +0000 (0:00:02.291) 0:00:57.799 *** 2025-09-10 01:09:39.871757 | orchestrator | changed: [testbed-node-0] 2025-09-10 01:09:39.871766 | orchestrator | 2025-09-10 01:09:39.871780 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-10 01:09:39.871790 | orchestrator | Wednesday 10 September 2025 01:08:15 +0000 (0:00:02.321) 0:01:00.121 *** 2025-09-10 01:09:39.871799 | orchestrator | 2025-09-10 01:09:39.871809 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-10 01:09:39.871942 | orchestrator | Wednesday 10 September 2025 01:08:15 +0000 (0:00:00.068) 0:01:00.189 *** 2025-09-10 01:09:39.871957 | orchestrator | 2025-09-10 01:09:39.871966 | orchestrator | TASK [grafana : Flush handlers] 
************************************************ 2025-09-10 01:09:39.871976 | orchestrator | Wednesday 10 September 2025 01:08:16 +0000 (0:00:00.075) 0:01:00.264 *** 2025-09-10 01:09:39.871985 | orchestrator | 2025-09-10 01:09:39.871995 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-09-10 01:09:39.872004 | orchestrator | Wednesday 10 September 2025 01:08:16 +0000 (0:00:00.337) 0:01:00.602 *** 2025-09-10 01:09:39.872013 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:09:39.872023 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:09:39.872032 | orchestrator | changed: [testbed-node-0] 2025-09-10 01:09:39.872042 | orchestrator | 2025-09-10 01:09:39.872051 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-09-10 01:09:39.872061 | orchestrator | Wednesday 10 September 2025 01:08:18 +0000 (0:00:01.900) 0:01:02.502 *** 2025-09-10 01:09:39.872070 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:09:39.872080 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:09:39.872089 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-09-10 01:09:39.872107 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2025-09-10 01:09:39.872117 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 
2025-09-10 01:09:39.872126 | orchestrator | ok: [testbed-node-0] 2025-09-10 01:09:39.872136 | orchestrator | 2025-09-10 01:09:39.872145 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-09-10 01:09:39.872155 | orchestrator | Wednesday 10 September 2025 01:08:57 +0000 (0:00:38.933) 0:01:41.436 *** 2025-09-10 01:09:39.872164 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:09:39.872174 | orchestrator | changed: [testbed-node-1] 2025-09-10 01:09:39.872183 | orchestrator | changed: [testbed-node-2] 2025-09-10 01:09:39.872193 | orchestrator | 2025-09-10 01:09:39.872202 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-09-10 01:09:39.872212 | orchestrator | Wednesday 10 September 2025 01:09:33 +0000 (0:00:36.386) 0:02:17.823 *** 2025-09-10 01:09:39.872221 | orchestrator | ok: [testbed-node-0] 2025-09-10 01:09:39.872231 | orchestrator | 2025-09-10 01:09:39.872240 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-09-10 01:09:39.872250 | orchestrator | Wednesday 10 September 2025 01:09:35 +0000 (0:00:02.251) 0:02:20.074 *** 2025-09-10 01:09:39.872259 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:09:39.872269 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:09:39.872278 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:09:39.872287 | orchestrator | 2025-09-10 01:09:39.872297 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-09-10 01:09:39.872306 | orchestrator | Wednesday 10 September 2025 01:09:36 +0000 (0:00:00.514) 0:02:20.589 *** 2025-09-10 01:09:39.872317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': 
False}}})  2025-09-10 01:09:39.872330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-09-10 01:09:39.872340 | orchestrator | 2025-09-10 01:09:39.872350 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-09-10 01:09:39.872359 | orchestrator | Wednesday 10 September 2025 01:09:38 +0000 (0:00:02.428) 0:02:23.018 *** 2025-09-10 01:09:39.872369 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:09:39.872378 | orchestrator | 2025-09-10 01:09:39.872388 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-10 01:09:39.872398 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-10 01:09:39.872409 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-10 01:09:39.872418 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-10 01:09:39.872428 | orchestrator | 2025-09-10 01:09:39.872437 | orchestrator | 2025-09-10 01:09:39.872447 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-10 01:09:39.872456 | orchestrator | Wednesday 10 September 2025 01:09:39 +0000 (0:00:00.301) 0:02:23.320 *** 2025-09-10 01:09:39.872466 | orchestrator | =============================================================================== 2025-09-10 01:09:39.872475 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 38.93s 2025-09-10 01:09:39.872513 | orchestrator | grafana : Waiting for grafana 
to start on first node ------------------- 38.93s 2025-09-10 01:09:39.872528 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 36.39s 2025-09-10 01:09:39.872538 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.43s 2025-09-10 01:09:39.872548 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.32s 2025-09-10 01:09:39.872563 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.29s 2025-09-10 01:09:39.872575 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.25s 2025-09-10 01:09:39.872585 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.90s 2025-09-10 01:09:39.872597 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.52s 2025-09-10 01:09:39.872608 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.47s 2025-09-10 01:09:39.872621 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.45s 2025-09-10 01:09:39.872632 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.37s 2025-09-10 01:09:39.872643 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.34s 2025-09-10 01:09:39.872654 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.27s 2025-09-10 01:09:39.872665 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 1.05s 2025-09-10 01:09:39.872676 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.94s 2025-09-10 01:09:39.872687 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.83s 2025-09-10 01:09:39.872697 | orchestrator | grafana : Find templated grafana dashboards 
----------------------------- 0.83s 2025-09-10 01:09:39.872709 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.80s 2025-09-10 01:09:39.872720 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.73s 2025-09-10 01:09:39.872731 | orchestrator | 2025-09-10 01:09:39 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state STARTED 2025-09-10 01:09:39.873339 | orchestrator | 2025-09-10 01:09:39 | INFO  | Task 917bbdb6-2035-4f1b-ac78-71ec915cfe59 is in state STARTED 2025-09-10 01:09:39.873358 | orchestrator | 2025-09-10 01:09:39 | INFO  | Wait 1 second(s) until the next check 2025-09-10 01:09:42.925395 | orchestrator | 2025-09-10 01:09:42 | INFO  | Task d80eb772-100b-4ae2-a680-a11167a8dee6 is in state SUCCESS 2025-09-10 01:09:42.927208 | orchestrator | 2025-09-10 01:09:42.927249 | orchestrator | 2025-09-10 01:09:42.927262 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-10 01:09:42.927274 | orchestrator | 2025-09-10 01:09:42.927285 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-09-10 01:09:42.927296 | orchestrator | Wednesday 10 September 2025 01:00:42 +0000 (0:00:00.282) 0:00:00.282 *** 2025-09-10 01:09:42.927307 | orchestrator | changed: [testbed-manager] 2025-09-10 01:09:42.927319 | orchestrator | changed: [testbed-node-0] 2025-09-10 01:09:42.927330 | orchestrator | changed: [testbed-node-1] 2025-09-10 01:09:42.927340 | orchestrator | changed: [testbed-node-2] 2025-09-10 01:09:42.927351 | orchestrator | changed: [testbed-node-3] 2025-09-10 01:09:42.927362 | orchestrator | changed: [testbed-node-4] 2025-09-10 01:09:42.927372 | orchestrator | changed: [testbed-node-5] 2025-09-10 01:09:42.927383 | orchestrator | 2025-09-10 01:09:42.927394 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-10 01:09:42.927404 | 
orchestrator | Wednesday 10 September 2025 01:00:43 +0000 (0:00:00.770) 0:00:01.053 *** 2025-09-10 01:09:42.927415 | orchestrator | changed: [testbed-manager] 2025-09-10 01:09:42.927426 | orchestrator | changed: [testbed-node-0] 2025-09-10 01:09:42.927436 | orchestrator | changed: [testbed-node-1] 2025-09-10 01:09:42.927447 | orchestrator | changed: [testbed-node-2] 2025-09-10 01:09:42.927457 | orchestrator | changed: [testbed-node-3] 2025-09-10 01:09:42.927524 | orchestrator | changed: [testbed-node-4] 2025-09-10 01:09:42.927537 | orchestrator | changed: [testbed-node-5] 2025-09-10 01:09:42.927548 | orchestrator | 2025-09-10 01:09:42.927558 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-10 01:09:42.927569 | orchestrator | Wednesday 10 September 2025 01:00:43 +0000 (0:00:00.559) 0:00:01.612 *** 2025-09-10 01:09:42.927580 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-09-10 01:09:42.927591 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-09-10 01:09:42.927602 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-09-10 01:09:42.927613 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-09-10 01:09:42.927623 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-09-10 01:09:42.927634 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-09-10 01:09:42.927644 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-09-10 01:09:42.927655 | orchestrator | 2025-09-10 01:09:42.927665 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-09-10 01:09:42.927675 | orchestrator | 2025-09-10 01:09:42.927686 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-09-10 01:09:42.927696 | orchestrator | Wednesday 10 September 2025 01:00:44 +0000 (0:00:01.145) 0:00:02.757 *** 
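The "Group hosts based on enabled services" task above places every host into a dynamic group named after each service flag (here `enable_nova_True`). A rough Python illustration of that key pattern, assuming a plain flag dict; the real plays build these groups with Ansible's `group_by` module and templated keys:

```python
def service_groups(flags):
    """Mimic the group_by key pattern: enable_nova=True -> 'enable_nova_True'."""
    return [f"{name}_{value}" for name, value in sorted(flags.items())]

print(service_groups({"enable_nova": True}))  # → ['enable_nova_True']
```

Later plays can then target the `enable_nova_True` group directly instead of re-evaluating the flag on every host.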
2025-09-10 01:09:42.927707 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 01:09:42.927718 | orchestrator | 2025-09-10 01:09:42.927729 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-09-10 01:09:42.927739 | orchestrator | Wednesday 10 September 2025 01:00:46 +0000 (0:00:01.230) 0:00:03.988 *** 2025-09-10 01:09:42.927750 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-09-10 01:09:42.927762 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-09-10 01:09:42.927772 | orchestrator | 2025-09-10 01:09:42.927783 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-09-10 01:09:42.927809 | orchestrator | Wednesday 10 September 2025 01:00:50 +0000 (0:00:04.482) 0:00:08.470 *** 2025-09-10 01:09:42.927823 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-10 01:09:42.927836 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-10 01:09:42.927849 | orchestrator | changed: [testbed-node-0] 2025-09-10 01:09:42.927862 | orchestrator | 2025-09-10 01:09:42.927875 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-09-10 01:09:42.927887 | orchestrator | Wednesday 10 September 2025 01:00:54 +0000 (0:00:04.098) 0:00:12.569 *** 2025-09-10 01:09:42.927899 | orchestrator | changed: [testbed-node-0] 2025-09-10 01:09:42.927912 | orchestrator | 2025-09-10 01:09:42.927924 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-09-10 01:09:42.927936 | orchestrator | Wednesday 10 September 2025 01:00:55 +0000 (0:00:00.749) 0:00:13.319 *** 2025-09-10 01:09:42.927949 | orchestrator | changed: [testbed-node-0] 2025-09-10 01:09:42.927962 | orchestrator | 2025-09-10 01:09:42.927975 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-09-10 
01:09:42.927987 | orchestrator | Wednesday 10 September 2025 01:00:56 +0000 (0:00:01.486) 0:00:14.806 ***
2025-09-10 01:09:42.928001 | orchestrator | changed: [testbed-node-0]
2025-09-10 01:09:42.928013 | orchestrator |
2025-09-10 01:09:42.928026 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-09-10 01:09:42.928038 | orchestrator | Wednesday 10 September 2025 01:00:59 +0000 (0:00:02.804) 0:00:17.610 ***
2025-09-10 01:09:42.928051 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:09:42.928063 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:09:42.928076 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:09:42.928089 | orchestrator |
2025-09-10 01:09:42.928102 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-09-10 01:09:42.928114 | orchestrator | Wednesday 10 September 2025 01:01:00 +0000 (0:00:00.589) 0:00:18.200 ***
2025-09-10 01:09:42.928136 | orchestrator | ok: [testbed-node-0]
2025-09-10 01:09:42.928149 | orchestrator |
2025-09-10 01:09:42.928162 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2025-09-10 01:09:42.928172 | orchestrator | Wednesday 10 September 2025 01:01:31 +0000 (0:00:31.098) 0:00:49.300 ***
2025-09-10 01:09:42.928183 | orchestrator | changed: [testbed-node-0]
2025-09-10 01:09:42.928194 | orchestrator |
2025-09-10 01:09:42.928204 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-09-10 01:09:42.928215 | orchestrator | Wednesday 10 September 2025 01:01:45 +0000 (0:00:14.172) 0:01:03.472 ***
2025-09-10 01:09:42.928226 | orchestrator | ok: [testbed-node-0]
2025-09-10 01:09:42.928236 | orchestrator |
2025-09-10 01:09:42.928247 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-09-10 01:09:42.928258 | orchestrator | Wednesday 10 September 2025 01:01:55 +0000 (0:00:10.050) 0:01:13.523 ***
2025-09-10 01:09:42.928279 | orchestrator | ok: [testbed-node-0]
2025-09-10 01:09:42.928290 | orchestrator |
2025-09-10 01:09:42.928301 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2025-09-10 01:09:42.928311 | orchestrator | Wednesday 10 September 2025 01:01:56 +0000 (0:00:01.115) 0:01:14.638 ***
2025-09-10 01:09:42.928322 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:09:42.928333 | orchestrator |
2025-09-10 01:09:42.928343 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-09-10 01:09:42.928354 | orchestrator | Wednesday 10 September 2025 01:01:57 +0000 (0:00:00.439) 0:01:15.078 ***
2025-09-10 01:09:42.928365 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-10 01:09:42.928375 | orchestrator |
2025-09-10 01:09:42.928386 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-09-10 01:09:42.928397 | orchestrator | Wednesday 10 September 2025 01:01:57 +0000 (0:00:00.476) 0:01:15.554 ***
2025-09-10 01:09:42.928407 | orchestrator | ok: [testbed-node-0]
2025-09-10 01:09:42.928418 | orchestrator |
2025-09-10 01:09:42.928428 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-09-10 01:09:42.928439 | orchestrator | Wednesday 10 September 2025 01:02:15 +0000 (0:00:17.663) 0:01:33.218 ***
2025-09-10 01:09:42.928449 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:09:42.928459 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:09:42.928470 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:09:42.928480 | orchestrator |
2025-09-10 01:09:42.928510 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2025-09-10 01:09:42.928521 | orchestrator |
2025-09-10 01:09:42.928532 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-09-10 01:09:42.928542 | orchestrator | Wednesday 10 September 2025 01:02:15 +0000 (0:00:00.463) 0:01:33.682 ***
2025-09-10 01:09:42.928553 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-10 01:09:42.928564 | orchestrator |
2025-09-10 01:09:42.928574 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2025-09-10 01:09:42.928585 | orchestrator | Wednesday 10 September 2025 01:02:16 +0000 (0:00:00.656) 0:01:34.338 ***
2025-09-10 01:09:42.928596 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:09:42.928606 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:09:42.928617 | orchestrator | changed: [testbed-node-0]
2025-09-10 01:09:42.928627 | orchestrator |
2025-09-10 01:09:42.928638 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2025-09-10 01:09:42.928649 | orchestrator | Wednesday 10 September 2025 01:02:18 +0000 (0:00:02.097) 0:01:36.435 ***
2025-09-10 01:09:42.928659 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:09:42.928670 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:09:42.928680 | orchestrator | changed: [testbed-node-0]
2025-09-10 01:09:42.928691 | orchestrator |
2025-09-10 01:09:42.928702 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-09-10 01:09:42.928712 | orchestrator | Wednesday 10 September 2025 01:02:20 +0000 (0:00:02.172) 0:01:38.608 ***
2025-09-10 01:09:42.928730 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:09:42.928741 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:09:42.928752 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:09:42.928762 | orchestrator |
2025-09-10 01:09:42.928773 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-09-10 01:09:42.928790 | orchestrator | Wednesday 10 September 2025 01:02:21 +0000 (0:00:00.383) 0:01:38.992 ***
2025-09-10 01:09:42.928801 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-09-10 01:09:42.928811 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:09:42.928822 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-09-10 01:09:42.928832 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:09:42.928843 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-09-10 01:09:42.928853 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2025-09-10 01:09:42.928864 | orchestrator |
2025-09-10 01:09:42.928875 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-09-10 01:09:42.928886 | orchestrator | Wednesday 10 September 2025 01:02:31 +0000 (0:00:10.000) 0:01:48.996 ***
2025-09-10 01:09:42.928896 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:09:42.928907 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:09:42.928917 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:09:42.928928 | orchestrator |
2025-09-10 01:09:42.928938 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-09-10 01:09:42.928949 | orchestrator | Wednesday 10 September 2025 01:02:32 +0000 (0:00:01.292) 0:01:50.289 ***
2025-09-10 01:09:42.928959 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-09-10 01:09:42.928970 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:09:42.928981 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-09-10 01:09:42.928992 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:09:42.929002 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-09-10 01:09:42.929013 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:09:42.929023 | orchestrator |
2025-09-10 01:09:42.929034 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2025-09-10 01:09:42.929044 | orchestrator | Wednesday 10 September 2025 01:02:34 +0000 (0:00:00.580) 0:01:51.996 ***
2025-09-10 01:09:42.929055 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:09:42.929066 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:09:42.929076 | orchestrator | changed: [testbed-node-0]
2025-09-10 01:09:42.929087 | orchestrator |
2025-09-10 01:09:42.929097 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2025-09-10 01:09:42.929108 | orchestrator | Wednesday 10 September 2025 01:02:34 +0000 (0:00:00.580) 0:01:52.576 ***
2025-09-10 01:09:42.929118 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:09:42.929129 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:09:42.929139 | orchestrator | changed: [testbed-node-0]
2025-09-10 01:09:42.929150 | orchestrator |
2025-09-10 01:09:42.929160 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2025-09-10 01:09:42.929171 | orchestrator | Wednesday 10 September 2025 01:02:35 +0000 (0:00:01.209) 0:01:53.785 ***
2025-09-10 01:09:42.929182 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:09:42.929192 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:09:42.929213 | orchestrator | changed: [testbed-node-0]
2025-09-10 01:09:42.929224 | orchestrator |
2025-09-10 01:09:42.929235 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2025-09-10 01:09:42.929246 | orchestrator | Wednesday 10 September 2025 01:02:38 +0000 (0:00:02.652) 0:01:56.437 ***
2025-09-10 01:09:42.929257 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:09:42.929267 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:09:42.929278 | orchestrator | ok: [testbed-node-0]
2025-09-10 01:09:42.929288 | orchestrator |
2025-09-10 01:09:42.929299 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-09-10 01:09:42.929310 | orchestrator | Wednesday 10 September 2025 01:02:59 +0000 (0:00:21.104) 0:02:17.542 ***
2025-09-10 01:09:42.929327 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:09:42.929338 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:09:42.929348 | orchestrator | ok: [testbed-node-0]
2025-09-10 01:09:42.929359 | orchestrator |
2025-09-10 01:09:42.929370 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-09-10 01:09:42.929380 | orchestrator | Wednesday 10 September 2025 01:03:11 +0000 (0:00:11.988) 0:02:29.531 ***
2025-09-10 01:09:42.929391 | orchestrator | ok: [testbed-node-0]
2025-09-10 01:09:42.929401 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:09:42.929412 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:09:42.929422 | orchestrator |
2025-09-10 01:09:42.929433 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2025-09-10 01:09:42.929443 | orchestrator | Wednesday 10 September 2025 01:03:12 +0000 (0:00:01.194) 0:02:30.726 ***
2025-09-10 01:09:42.929454 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:09:42.929464 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:09:42.929475 | orchestrator | changed: [testbed-node-0]
2025-09-10 01:09:42.929501 | orchestrator |
2025-09-10 01:09:42.929513 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2025-09-10 01:09:42.929524 | orchestrator | Wednesday 10 September 2025 01:03:25 +0000 (0:00:12.890) 0:02:43.616 ***
2025-09-10 01:09:42.929534 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:09:42.929545 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:09:42.929555 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:09:42.929566 | orchestrator |
2025-09-10 01:09:42.929577 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-09-10 01:09:42.929587 | orchestrator | Wednesday 10 September 2025 01:03:26 +0000 (0:00:01.103) 0:02:44.720 ***
2025-09-10 01:09:42.929598 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:09:42.929609 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:09:42.929619 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:09:42.929630 | orchestrator |
2025-09-10 01:09:42.929640 | orchestrator | PLAY [Apply role nova] *********************************************************
2025-09-10 01:09:42.929651 | orchestrator |
2025-09-10 01:09:42.929661 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-09-10 01:09:42.929672 | orchestrator | Wednesday 10 September 2025 01:03:27 +0000 (0:00:00.559) 0:02:45.280 ***
2025-09-10 01:09:42.929683 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-10 01:09:42.929694 | orchestrator |
2025-09-10 01:09:42.929705 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2025-09-10 01:09:42.929715 | orchestrator | Wednesday 10 September 2025 01:03:27 +0000 (0:00:00.561) 0:02:45.842 ***
2025-09-10 01:09:42.929731 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2025-09-10 01:09:42.929742 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2025-09-10 01:09:42.929753 | orchestrator |
2025-09-10 01:09:42.929763 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2025-09-10 01:09:42.929774 | orchestrator | Wednesday 10 September 2025 01:03:31 +0000 (0:00:03.482) 0:02:49.324 ***
2025-09-10 01:09:42.929785 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2025-09-10 01:09:42.929797 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2025-09-10 01:09:42.929807 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2025-09-10 01:09:42.929818 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2025-09-10 01:09:42.929829 | orchestrator |
2025-09-10 01:09:42.929839 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2025-09-10 01:09:42.929850 | orchestrator | Wednesday 10 September 2025 01:03:38 +0000 (0:00:07.032) 0:02:56.356 ***
2025-09-10 01:09:42.929869 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-10 01:09:42.929880 | orchestrator |
2025-09-10 01:09:42.929890 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2025-09-10 01:09:42.929901 | orchestrator | Wednesday 10 September 2025 01:03:41 +0000 (0:00:03.431) 0:02:59.788 ***
2025-09-10 01:09:42.929911 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-10 01:09:42.929922 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2025-09-10 01:09:42.929933 | orchestrator |
2025-09-10 01:09:42.929943 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2025-09-10 01:09:42.929954 | orchestrator | Wednesday 10 September 2025 01:03:45 +0000 (0:00:03.894) 0:03:03.682 ***
2025-09-10 01:09:42.929964 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-10 01:09:42.929975 | orchestrator |
2025-09-10 01:09:42.929985 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2025-09-10 01:09:42.929996 | orchestrator | Wednesday 10 September 2025 01:03:49 +0000 (0:00:03.450) 0:03:07.133 ***
2025-09-10 01:09:42.930007 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2025-09-10
01:09:42.930071 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-09-10 01:09:42.930086 | orchestrator | 2025-09-10 01:09:42.930097 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-09-10 01:09:42.930124 | orchestrator | Wednesday 10 September 2025 01:03:57 +0000 (0:00:08.124) 0:03:15.257 *** 2025-09-10 01:09:42.930142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-10 01:09:42.930173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-10 01:09:42.930196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': 
'8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-10 01:09:42.930217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-10 01:09:42.930231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-10 01:09:42.930242 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-10 01:09:42.930253 | orchestrator |
2025-09-10 01:09:42.930263 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2025-09-10 01:09:42.930275 | orchestrator | Wednesday 10 September 2025 01:03:59 +0000 (0:00:02.025) 0:03:17.283 ***
2025-09-10 01:09:42.930285 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:09:42.930296 | orchestrator |
2025-09-10 01:09:42.930307 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2025-09-10 01:09:42.930317 | orchestrator | Wednesday 10 September 2025 01:03:59 +0000 (0:00:00.306) 0:03:17.590 ***
2025-09-10 01:09:42.930328 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:09:42.930338 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:09:42.930349 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:09:42.930360 | orchestrator |
2025-09-10 01:09:42.930370 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2025-09-10 01:09:42.930381 | orchestrator | Wednesday 10 September 2025 01:04:00 +0000 (0:00:00.513) 0:03:18.103 ***
2025-09-10 01:09:42.930399 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-10 01:09:42.930410 | orchestrator |
2025-09-10 01:09:42.930420 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2025-09-10 01:09:42.930431 | orchestrator | Wednesday 10 September 2025 01:04:00 +0000 (0:00:00.672) 0:03:18.776 ***
2025-09-10 01:09:42.930446 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:09:42.930458 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:09:42.930468 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:09:42.930478 | orchestrator |
2025-09-10 01:09:42.930510 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-09-10 01:09:42.930522 | orchestrator |
Wednesday 10 September 2025 01:04:01 +0000 (0:00:00.431) 0:03:19.208 *** 2025-09-10 01:09:42.930532 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 01:09:42.930543 | orchestrator | 2025-09-10 01:09:42.930554 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-09-10 01:09:42.930564 | orchestrator | Wednesday 10 September 2025 01:04:01 +0000 (0:00:00.523) 0:03:19.732 *** 2025-09-10 01:09:42.930576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-10 01:09:42.930603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-10 01:09:42.930622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-10 01:09:42.930643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-10 01:09:42.930655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-10 01:09:42.930672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-10 01:09:42.930684 | orchestrator | 2025-09-10 01:09:42.930695 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-09-10 01:09:42.930706 | orchestrator | Wednesday 10 September 2025 01:04:04 +0000 (0:00:02.671) 0:03:22.407 *** 2025-09-10 01:09:42.930717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-10 01:09:42.930743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-10 01:09:42.930756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-10 01:09:42.930767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-10 01:09:42.930778 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:09:42.930789 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:09:42.930809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-10 01:09:42.930821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-10 01:09:42.930839 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:09:42.930850 | orchestrator | 2025-09-10 01:09:42.930861 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-09-10 01:09:42.930872 | orchestrator | Wednesday 10 September 2025 01:04:06 +0000 (0:00:01.640) 0:03:24.047 *** 2025-09-10 01:09:42.930889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-10 01:09:42.930901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-10 01:09:42.930912 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:09:42.930932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-10 01:09:42.930944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 
'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-10 01:09:42.930967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-10 01:09:42.930980 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:09:42.930991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 
'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-10 01:09:42.931003 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:09:42.931013 | orchestrator | 2025-09-10 01:09:42.931024 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-09-10 01:09:42.931035 | orchestrator | Wednesday 10 September 2025 01:04:07 +0000 (0:00:01.391) 0:03:25.438 *** 2025-09-10 01:09:42.931054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-10 01:09:42.931066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-10 01:09:42.931091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-10 01:09:42.931103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-10 01:09:42.931122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-10 01:09:42.931134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-10 01:09:42.931155 | orchestrator | 2025-09-10 01:09:42.931166 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-09-10 01:09:42.931177 | orchestrator | Wednesday 10 September 2025 01:04:10 +0000 (0:00:02.837) 0:03:28.275 *** 2025-09-10 01:09:42.931189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-10 
01:09:42.931209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-10 01:09:42.931228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': 
'8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-10 01:09:42.931248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-10 01:09:42.931260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-10 01:09:42.931271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 
'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-10 01:09:42.931282 | orchestrator | 2025-09-10 01:09:42.931298 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-09-10 01:09:42.931309 | orchestrator | Wednesday 10 September 2025 01:04:18 +0000 (0:00:08.586) 0:03:36.862 *** 2025-09-10 01:09:42.931321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-10 01:09:42.931338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-10 01:09:42.931349 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:09:42.931367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-10 01:09:42.931379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 
'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-10 01:09:42.931390 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:09:42.931405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-10 01:09:42.931418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 
'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-10 01:09:42.931429 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:09:42.931439 | orchestrator | 2025-09-10 01:09:42.931450 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-09-10 01:09:42.931461 | orchestrator | Wednesday 10 September 2025 01:04:19 +0000 (0:00:00.901) 0:03:37.763 *** 2025-09-10 01:09:42.931478 | orchestrator | changed: [testbed-node-0] 2025-09-10 01:09:42.931548 | orchestrator | changed: [testbed-node-1] 2025-09-10 01:09:42.931560 | orchestrator | changed: [testbed-node-2] 2025-09-10 01:09:42.931571 | orchestrator | 2025-09-10 01:09:42.931588 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-09-10 01:09:42.931599 | orchestrator | Wednesday 10 September 2025 01:04:22 +0000 (0:00:02.357) 0:03:40.121 *** 2025-09-10 01:09:42.931610 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:09:42.931621 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:09:42.931632 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:09:42.931642 | orchestrator | 2025-09-10 01:09:42.931653 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-09-10 01:09:42.931663 | orchestrator | Wednesday 10 September 2025 01:04:22 +0000 (0:00:00.668) 0:03:40.789 *** 2025-09-10 01:09:42.931675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-10 01:09:42.931693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 
'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-10 01:09:42.931705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-10 01:09:42.931723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-10 01:09:42.931742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-10 01:09:42.931755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-10 01:09:42.931766 | orchestrator | 2025-09-10 01:09:42.931776 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-10 01:09:42.931787 | orchestrator | Wednesday 10 September 2025 01:04:26 +0000 (0:00:03.433) 0:03:44.223 *** 2025-09-10 01:09:42.931798 | orchestrator | 2025-09-10 01:09:42.931809 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-10 01:09:42.931819 | orchestrator | Wednesday 10 September 2025 01:04:26 +0000 (0:00:00.250) 0:03:44.473 *** 
2025-09-10 01:09:42.931830 | orchestrator | 2025-09-10 01:09:42.931840 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-10 01:09:42.931851 | orchestrator | Wednesday 10 September 2025 01:04:26 +0000 (0:00:00.124) 0:03:44.597 *** 2025-09-10 01:09:42.931862 | orchestrator | 2025-09-10 01:09:42.931877 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-09-10 01:09:42.931888 | orchestrator | Wednesday 10 September 2025 01:04:26 +0000 (0:00:00.154) 0:03:44.752 *** 2025-09-10 01:09:42.931898 | orchestrator | changed: [testbed-node-0] 2025-09-10 01:09:42.931910 | orchestrator | changed: [testbed-node-2] 2025-09-10 01:09:42.931920 | orchestrator | changed: [testbed-node-1] 2025-09-10 01:09:42.931930 | orchestrator | 2025-09-10 01:09:42.931941 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-09-10 01:09:42.931952 | orchestrator | Wednesday 10 September 2025 01:04:51 +0000 (0:00:24.185) 0:04:08.937 *** 2025-09-10 01:09:42.931962 | orchestrator | changed: [testbed-node-1] 2025-09-10 01:09:42.931973 | orchestrator | changed: [testbed-node-2] 2025-09-10 01:09:42.931984 | orchestrator | changed: [testbed-node-0] 2025-09-10 01:09:42.932003 | orchestrator | 2025-09-10 01:09:42.932014 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-09-10 01:09:42.932024 | orchestrator | 2025-09-10 01:09:42.932035 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-10 01:09:42.932045 | orchestrator | Wednesday 10 September 2025 01:05:01 +0000 (0:00:10.927) 0:04:19.864 *** 2025-09-10 01:09:42.932056 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 01:09:42.932067 | orchestrator | 2025-09-10 01:09:42.932076 
| orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-10 01:09:42.932086 | orchestrator | Wednesday 10 September 2025 01:05:03 +0000 (0:00:01.452) 0:04:21.316 *** 2025-09-10 01:09:42.932095 | orchestrator | skipping: [testbed-node-3] 2025-09-10 01:09:42.932104 | orchestrator | skipping: [testbed-node-4] 2025-09-10 01:09:42.932114 | orchestrator | skipping: [testbed-node-5] 2025-09-10 01:09:42.932123 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:09:42.932132 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:09:42.932142 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:09:42.932151 | orchestrator | 2025-09-10 01:09:42.932160 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-09-10 01:09:42.932170 | orchestrator | Wednesday 10 September 2025 01:05:04 +0000 (0:00:00.937) 0:04:22.254 *** 2025-09-10 01:09:42.932179 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:09:42.932189 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:09:42.932198 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:09:42.932207 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-10 01:09:42.932217 | orchestrator | 2025-09-10 01:09:42.932226 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-10 01:09:42.932240 | orchestrator | Wednesday 10 September 2025 01:05:05 +0000 (0:00:01.623) 0:04:23.878 *** 2025-09-10 01:09:42.932250 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-09-10 01:09:42.932260 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-09-10 01:09:42.932269 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-09-10 01:09:42.932278 | orchestrator | 2025-09-10 01:09:42.932288 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-09-10 
01:09:42.932297 | orchestrator | Wednesday 10 September 2025 01:05:06 +0000 (0:00:00.887) 0:04:24.766 *** 2025-09-10 01:09:42.932307 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-09-10 01:09:42.932316 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-09-10 01:09:42.932325 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-09-10 01:09:42.932335 | orchestrator | 2025-09-10 01:09:42.932344 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-09-10 01:09:42.932353 | orchestrator | Wednesday 10 September 2025 01:05:08 +0000 (0:00:01.425) 0:04:26.191 *** 2025-09-10 01:09:42.932363 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-09-10 01:09:42.932372 | orchestrator | skipping: [testbed-node-3] 2025-09-10 01:09:42.932382 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-09-10 01:09:42.932391 | orchestrator | skipping: [testbed-node-4] 2025-09-10 01:09:42.932401 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-09-10 01:09:42.932410 | orchestrator | skipping: [testbed-node-5] 2025-09-10 01:09:42.932419 | orchestrator | 2025-09-10 01:09:42.932429 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-09-10 01:09:42.932438 | orchestrator | Wednesday 10 September 2025 01:05:09 +0000 (0:00:01.549) 0:04:27.741 *** 2025-09-10 01:09:42.932448 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-10 01:09:42.932457 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-10 01:09:42.932467 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:09:42.932483 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-10 01:09:42.932509 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  
2025-09-10 01:09:42.932519 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-10 01:09:42.932528 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-10 01:09:42.932537 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:09:42.932547 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-10 01:09:42.932556 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-10 01:09:42.932566 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:09:42.932575 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-10 01:09:42.932585 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-10 01:09:42.932594 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-10 01:09:42.932609 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-10 01:09:42.932618 | orchestrator | 2025-09-10 01:09:42.932628 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-09-10 01:09:42.932637 | orchestrator | Wednesday 10 September 2025 01:05:12 +0000 (0:00:02.769) 0:04:30.510 *** 2025-09-10 01:09:42.932646 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:09:42.932656 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:09:42.932665 | orchestrator | changed: [testbed-node-3] 2025-09-10 01:09:42.932675 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:09:42.932684 | orchestrator | changed: [testbed-node-4] 2025-09-10 01:09:42.932693 | orchestrator | changed: [testbed-node-5] 2025-09-10 01:09:42.932703 | orchestrator | 2025-09-10 01:09:42.932712 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-09-10 01:09:42.932722 | orchestrator | 
Wednesday 10 September 2025 01:05:14 +0000 (0:00:01.870) 0:04:32.380 *** 2025-09-10 01:09:42.932731 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:09:42.932740 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:09:42.932750 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:09:42.932759 | orchestrator | changed: [testbed-node-4] 2025-09-10 01:09:42.932769 | orchestrator | changed: [testbed-node-3] 2025-09-10 01:09:42.932778 | orchestrator | changed: [testbed-node-5] 2025-09-10 01:09:42.932787 | orchestrator | 2025-09-10 01:09:42.932797 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-09-10 01:09:42.932806 | orchestrator | Wednesday 10 September 2025 01:05:17 +0000 (0:00:02.675) 0:04:35.056 *** 2025-09-10 01:09:42.932817 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-10 01:09:42.932835 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 
'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-10 01:09:42.932852 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-10 01:09:42.932867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-10 01:09:42.932879 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-10 01:09:42.932889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-10 01:09:42.932899 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-10 
01:09:42.932914 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-10 01:09:42.932930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-10 01:09:42.932940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-10 01:09:42.932958 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 
'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-10 01:09:42.932968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-10 01:09:42.932978 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-10 01:09:42.932995 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-10 01:09:42.933011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-10 01:09:42.933021 | orchestrator | 2025-09-10 01:09:42.933031 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-10 01:09:42.933040 | orchestrator | Wednesday 10 September 2025 01:05:19 +0000 (0:00:02.640) 0:04:37.696 *** 2025-09-10 01:09:42.933050 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, 
testbed-node-0, testbed-node-1, testbed-node-2 2025-09-10 01:09:42.933060 | orchestrator | 2025-09-10 01:09:42.933069 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-09-10 01:09:42.933079 | orchestrator | Wednesday 10 September 2025 01:05:20 +0000 (0:00:01.070) 0:04:38.766 *** 2025-09-10 01:09:42.933093 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-10 01:09:42.933104 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 
67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-10 01:09:42.933114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-10 01:09:42.933136 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-10 01:09:42.933146 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-10 01:09:42.933156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-10 01:09:42.933170 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-10 01:09:42.933180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-10 01:09:42.933190 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-10 01:09:42.933212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-10 01:09:42.933222 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-10 01:09:42.933232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-10 01:09:42.933247 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-10 01:09:42.933257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-10 01:09:42.933267 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-10 01:09:42.933281 | orchestrator | 2025-09-10 01:09:42.933291 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-09-10 01:09:42.933300 | orchestrator | Wednesday 10 September 2025 01:05:24 +0000 (0:00:03.655) 0:04:42.421 *** 2025-09-10 01:09:42.933392 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-10 01:09:42.933406 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-10 01:09:42.933417 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-10 01:09:42.933426 | orchestrator | skipping: [testbed-node-3] 2025-09-10 01:09:42.933441 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-10 01:09:42.933452 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-10 01:09:42.933474 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 
'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-10 01:09:42.933484 | orchestrator | skipping: [testbed-node-4] 2025-09-10 01:09:42.933512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-10 01:09:42.933522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-10 01:09:42.933532 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:09:42.933542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-10 01:09:42.933556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-10 01:09:42.933567 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:09:42.933577 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-10 01:09:42.933598 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 
'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-10 01:09:42.933608 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-10 01:09:42.933618 | orchestrator | skipping: [testbed-node-5] 2025-09-10 01:09:42.933628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-10 01:09:42.933638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-10 01:09:42.933648 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:09:42.933658 | orchestrator | 2025-09-10 01:09:42.933667 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-09-10 01:09:42.933681 | orchestrator | Wednesday 10 September 2025 01:05:27 +0000 (0:00:02.968) 0:04:45.390 *** 2025-09-10 01:09:42.933691 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-10 01:09:42.933707 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-10 01:09:42.933724 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-10 01:09:42.933734 | orchestrator | skipping: [testbed-node-3] 2025-09-10 01:09:42.933744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-10 01:09:42.933754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-10 01:09:42.933764 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:09:42.933778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-10 01:09:42.933789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 
5672'], 'timeout': '30'}}})  2025-09-10 01:09:42.933807 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:09:42.933817 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-10 01:09:42.933834 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-10 01:09:42.933845 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-10 01:09:42.933854 | orchestrator | skipping: [testbed-node-4] 2025-09-10 01:09:42.933864 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-10 01:09:42.933878 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-10 01:09:42.933895 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-10 01:09:42.933905 | orchestrator | skipping: [testbed-node-5] 2025-09-10 01:09:42.933915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-10 01:09:42.933929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-10 01:09:42.933939 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:09:42.933949 | orchestrator | 2025-09-10 01:09:42.933958 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-10 01:09:42.933968 | orchestrator | Wednesday 10 September 2025 01:05:29 +0000 (0:00:01.991) 0:04:47.382 *** 2025-09-10 01:09:42.933978 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:09:42.933987 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:09:42.933997 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:09:42.934006 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-10 01:09:42.934055 | orchestrator | 2025-09-10 01:09:42.934068 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-09-10 01:09:42.934078 | orchestrator | Wednesday 10 September 2025 01:05:30 +0000 (0:00:00.861) 0:04:48.244 *** 2025-09-10 01:09:42.934087 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-10 01:09:42.934097 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-10 01:09:42.934106 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-10 01:09:42.934116 | orchestrator | 2025-09-10 01:09:42.934125 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-09-10 01:09:42.934135 | orchestrator | Wednesday 10 September 2025 01:05:31 +0000 (0:00:00.831) 0:04:49.075 *** 2025-09-10 01:09:42.934144 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-10 01:09:42.934153 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-10 01:09:42.934163 | orchestrator | ok: [testbed-node-5 -> 
localhost] 2025-09-10 01:09:42.934178 | orchestrator | 2025-09-10 01:09:42.934188 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-09-10 01:09:42.934197 | orchestrator | Wednesday 10 September 2025 01:05:32 +0000 (0:00:00.845) 0:04:49.921 *** 2025-09-10 01:09:42.934207 | orchestrator | ok: [testbed-node-3] 2025-09-10 01:09:42.934216 | orchestrator | ok: [testbed-node-4] 2025-09-10 01:09:42.934226 | orchestrator | ok: [testbed-node-5] 2025-09-10 01:09:42.934235 | orchestrator | 2025-09-10 01:09:42.934244 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-09-10 01:09:42.934254 | orchestrator | Wednesday 10 September 2025 01:05:32 +0000 (0:00:00.400) 0:04:50.322 *** 2025-09-10 01:09:42.934263 | orchestrator | ok: [testbed-node-3] 2025-09-10 01:09:42.934273 | orchestrator | ok: [testbed-node-4] 2025-09-10 01:09:42.934282 | orchestrator | ok: [testbed-node-5] 2025-09-10 01:09:42.934291 | orchestrator | 2025-09-10 01:09:42.934301 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-09-10 01:09:42.934310 | orchestrator | Wednesday 10 September 2025 01:05:33 +0000 (0:00:00.685) 0:04:51.007 *** 2025-09-10 01:09:42.934326 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-09-10 01:09:42.934335 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-09-10 01:09:42.934345 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-09-10 01:09:42.934354 | orchestrator | 2025-09-10 01:09:42.934364 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-09-10 01:09:42.934373 | orchestrator | Wednesday 10 September 2025 01:05:34 +0000 (0:00:01.172) 0:04:52.180 *** 2025-09-10 01:09:42.934383 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-09-10 01:09:42.934392 | orchestrator | changed: [testbed-node-3] => 
(item=nova-compute) 2025-09-10 01:09:42.934402 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-09-10 01:09:42.934411 | orchestrator | 2025-09-10 01:09:42.934420 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-09-10 01:09:42.934430 | orchestrator | Wednesday 10 September 2025 01:05:35 +0000 (0:00:01.190) 0:04:53.371 *** 2025-09-10 01:09:42.934439 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-09-10 01:09:42.934449 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-09-10 01:09:42.934458 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-09-10 01:09:42.934467 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-09-10 01:09:42.934477 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-09-10 01:09:42.934500 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-09-10 01:09:42.934510 | orchestrator | 2025-09-10 01:09:42.934520 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-09-10 01:09:42.934529 | orchestrator | Wednesday 10 September 2025 01:05:39 +0000 (0:00:03.828) 0:04:57.199 *** 2025-09-10 01:09:42.934539 | orchestrator | skipping: [testbed-node-3] 2025-09-10 01:09:42.934549 | orchestrator | skipping: [testbed-node-4] 2025-09-10 01:09:42.934558 | orchestrator | skipping: [testbed-node-5] 2025-09-10 01:09:42.934568 | orchestrator | 2025-09-10 01:09:42.934577 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-09-10 01:09:42.934587 | orchestrator | Wednesday 10 September 2025 01:05:39 +0000 (0:00:00.493) 0:04:57.693 *** 2025-09-10 01:09:42.934597 | orchestrator | skipping: [testbed-node-3] 2025-09-10 01:09:42.934606 | orchestrator | skipping: [testbed-node-4] 2025-09-10 01:09:42.934616 | orchestrator | skipping: [testbed-node-5] 2025-09-10 01:09:42.934625 | orchestrator | 
2025-09-10 01:09:42.934635 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-09-10 01:09:42.934645 | orchestrator | Wednesday 10 September 2025 01:05:40 +0000 (0:00:00.322) 0:04:58.016 *** 2025-09-10 01:09:42.934655 | orchestrator | changed: [testbed-node-3] 2025-09-10 01:09:42.934664 | orchestrator | changed: [testbed-node-4] 2025-09-10 01:09:42.934674 | orchestrator | changed: [testbed-node-5] 2025-09-10 01:09:42.934689 | orchestrator | 2025-09-10 01:09:42.934710 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-09-10 01:09:42.934720 | orchestrator | Wednesday 10 September 2025 01:05:41 +0000 (0:00:01.200) 0:04:59.217 *** 2025-09-10 01:09:42.934730 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-09-10 01:09:42.934740 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-09-10 01:09:42.934750 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-09-10 01:09:42.934760 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-09-10 01:09:42.934769 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-09-10 01:09:42.934779 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-09-10 01:09:42.934788 | orchestrator | 2025-09-10 01:09:42.934798 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-09-10 01:09:42.934807 
| orchestrator | Wednesday 10 September 2025 01:05:44 +0000 (0:00:03.265) 0:05:02.483 *** 2025-09-10 01:09:42.934817 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-10 01:09:42.934826 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-10 01:09:42.934836 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-10 01:09:42.934845 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-10 01:09:42.934854 | orchestrator | changed: [testbed-node-3] 2025-09-10 01:09:42.934864 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-10 01:09:42.934873 | orchestrator | changed: [testbed-node-4] 2025-09-10 01:09:42.934882 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-10 01:09:42.934892 | orchestrator | changed: [testbed-node-5] 2025-09-10 01:09:42.934901 | orchestrator | 2025-09-10 01:09:42.934911 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-09-10 01:09:42.934920 | orchestrator | Wednesday 10 September 2025 01:05:48 +0000 (0:00:04.107) 0:05:06.590 *** 2025-09-10 01:09:42.934929 | orchestrator | skipping: [testbed-node-3] 2025-09-10 01:09:42.934939 | orchestrator | 2025-09-10 01:09:42.934948 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-09-10 01:09:42.934958 | orchestrator | Wednesday 10 September 2025 01:05:48 +0000 (0:00:00.170) 0:05:06.761 *** 2025-09-10 01:09:42.934967 | orchestrator | skipping: [testbed-node-3] 2025-09-10 01:09:42.934976 | orchestrator | skipping: [testbed-node-4] 2025-09-10 01:09:42.934986 | orchestrator | skipping: [testbed-node-5] 2025-09-10 01:09:42.934995 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:09:42.935005 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:09:42.935014 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:09:42.935023 | orchestrator | 2025-09-10 01:09:42.935037 | orchestrator | TASK [nova-cell : Check for 
vendordata file] *********************************** 2025-09-10 01:09:42.935047 | orchestrator | Wednesday 10 September 2025 01:05:49 +0000 (0:00:00.615) 0:05:07.376 *** 2025-09-10 01:09:42.935057 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-10 01:09:42.935066 | orchestrator | 2025-09-10 01:09:42.935075 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-09-10 01:09:42.935085 | orchestrator | Wednesday 10 September 2025 01:05:50 +0000 (0:00:00.694) 0:05:08.071 *** 2025-09-10 01:09:42.935094 | orchestrator | skipping: [testbed-node-3] 2025-09-10 01:09:42.935103 | orchestrator | skipping: [testbed-node-4] 2025-09-10 01:09:42.935113 | orchestrator | skipping: [testbed-node-5] 2025-09-10 01:09:42.935122 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:09:42.935131 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:09:42.935147 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:09:42.935156 | orchestrator | 2025-09-10 01:09:42.935166 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-09-10 01:09:42.935176 | orchestrator | Wednesday 10 September 2025 01:05:51 +0000 (0:00:00.828) 0:05:08.899 *** 2025-09-10 01:09:42.935186 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 
67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-10 01:09:42.935202 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-10 01:09:42.935212 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-10 01:09:42.935223 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-10 01:09:42.935237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-10 01:09:42.935253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-10 01:09:42.935263 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-10 01:09:42.935278 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-10 01:09:42.935289 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-10 01:09:42.935299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-10 01:09:42.935309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-10 01:09:42.935323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-10 01:09:42.935339 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-10 01:09:42.935353 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-10 01:09:42.935363 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-10 01:09:42.935373 | orchestrator |
2025-09-10 01:09:42.935383 | orchestrator | TASK [nova-cell : Copying over nova.conf] **************************************
2025-09-10 01:09:42.935392 | orchestrator | Wednesday 10 September 2025 01:05:54 +0000 (0:00:03.660) 0:05:12.560 ***
2025-09-10 01:09:42.935402 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-10 01:09:42.935417 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-10 01:09:42.935438 | orchestrator | skipping: [testbed-node-5] => (item={'key':
'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-10 01:09:42.935448 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-10 01:09:42.935464 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-10 01:09:42.935474 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-10 01:09:42.935484 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-10 01:09:42.935530 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-10 01:09:42.935541 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-10 01:09:42.935558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-10 01:09:42.935568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-10 01:09:42.935578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-10 01:09:42.935588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout':
'30'}}})
2025-09-10 01:09:42.935608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-10 01:09:42.935618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-10 01:09:42.935627 | orchestrator |
2025-09-10 01:09:42.935637 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2025-09-10 01:09:42.935647 | orchestrator | Wednesday 10 September 2025 01:06:03 +0000 (0:00:08.857) 0:05:21.418 ***
2025-09-10 01:09:42.935656 | orchestrator | skipping: [testbed-node-3]
2025-09-10 01:09:42.935666 | orchestrator | skipping: [testbed-node-4]
2025-09-10 01:09:42.935676 | orchestrator | skipping: [testbed-node-5]
2025-09-10 01:09:42.935685 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:09:42.935695 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:09:42.935704 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:09:42.935713 | orchestrator |
2025-09-10 01:09:42.935723 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2025-09-10 01:09:42.935732 | orchestrator | Wednesday 10 September 2025 01:06:04 +0000 (0:00:01.257) 0:05:22.676 ***
2025-09-10 01:09:42.935742 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-09-10 01:09:42.935751 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-09-10 01:09:42.935761 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-09-10 01:09:42.935770 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-09-10 01:09:42.935784 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-09-10 01:09:42.935794 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-09-10 01:09:42.935803 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-09-10 01:09:42.935813 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:09:42.935822 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-09-10 01:09:42.935832 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:09:42.935841 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-09-10 01:09:42.935851 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:09:42.935860 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-09-10 01:09:42.935869 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-09-10 01:09:42.935879 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-09-10 01:09:42.935894 | orchestrator |
2025-09-10 01:09:42.935904 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2025-09-10 01:09:42.935913 | orchestrator | Wednesday 10 September 2025 01:06:10 +0000 (0:00:05.441) 0:05:28.117 ***
2025-09-10 01:09:42.935923 | orchestrator | skipping: [testbed-node-3]
2025-09-10 01:09:42.935932 | orchestrator | skipping: [testbed-node-4]
2025-09-10 01:09:42.935942 | orchestrator | skipping: [testbed-node-5]
2025-09-10 01:09:42.935951 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:09:42.935960 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:09:42.935970 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:09:42.935979 | orchestrator |
2025-09-10 01:09:42.935989 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2025-09-10 01:09:42.935999 | orchestrator | Wednesday 10 September 2025 01:06:10 +0000 (0:00:00.595) 0:05:28.712 ***
2025-09-10 01:09:42.936008 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-09-10 01:09:42.936018 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-09-10 01:09:42.936028 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-09-10 01:09:42.936037 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-09-10 01:09:42.936047 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-09-10 01:09:42.936056 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-09-10 01:09:42.936070 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-09-10 01:09:42.936080 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-09-10 01:09:42.936089 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-09-10 01:09:42.936099 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-09-10 01:09:42.936108 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:09:42.936118 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-09-10 01:09:42.936127 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-09-10 01:09:42.936136 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:09:42.936146 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-09-10 01:09:42.936155 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:09:42.936165 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-09-10 01:09:42.936174 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-09-10 01:09:42.936183 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-09-10 01:09:42.936193 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-09-10 01:09:42.936202 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-09-10 01:09:42.936211 | orchestrator |
2025-09-10 01:09:42.936221 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2025-09-10 01:09:42.936230 | orchestrator | Wednesday 10 September 2025 01:06:16 +0000 (0:00:05.771) 0:05:34.484 ***
2025-09-10 01:09:42.936240 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-09-10 01:09:42.936255 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-09-10 01:09:42.936269 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-09-10 01:09:42.936278 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-09-10 01:09:42.936288 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-10 01:09:42.936297 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-09-10 01:09:42.936306 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-09-10 01:09:42.936316 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-10 01:09:42.936325 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-10 01:09:42.936335 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-10 01:09:42.936344 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-10 01:09:42.936353 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-10 01:09:42.936363 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-09-10 01:09:42.936372 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:09:42.936382 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-10 01:09:42.936391 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-10 01:09:42.936400 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-09-10 01:09:42.936410 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:09:42.936419 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-10 01:09:42.936428 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-09-10 01:09:42.936438 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:09:42.936447 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-10 01:09:42.936457 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-10 01:09:42.936466 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-09-10 01:09:42.936475 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-10 01:09:42.936499 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-09-10 01:09:42.936510 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-09-10 01:09:42.936519 | orchestrator |
2025-09-10 01:09:42.936528 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2025-09-10 01:09:42.936542 | orchestrator | Wednesday 10 September 2025 01:06:25 +0000 (0:00:09.104) 0:05:43.588 ***
2025-09-10 01:09:42.936552 | orchestrator | skipping: [testbed-node-3]
2025-09-10 01:09:42.936562 | orchestrator | skipping: [testbed-node-4]
2025-09-10 01:09:42.936571 | orchestrator | skipping: [testbed-node-5]
2025-09-10 01:09:42.936581 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:09:42.936590 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:09:42.936600 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:09:42.936609 | orchestrator |
2025-09-10 01:09:42.936618 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2025-09-10 01:09:42.936628 | orchestrator | Wednesday 10 September 2025 01:06:26 +0000 (0:00:00.851) 0:05:44.439 ***
2025-09-10 01:09:42.936637 | orchestrator | skipping: [testbed-node-3]
2025-09-10 01:09:42.936647 | orchestrator | skipping: [testbed-node-4]
2025-09-10 01:09:42.936656 | orchestrator | skipping: [testbed-node-5]
2025-09-10 01:09:42.936673 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:09:42.936682 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:09:42.936691 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:09:42.936701 | orchestrator |
2025-09-10 01:09:42.936710 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ******************
2025-09-10 01:09:42.936720 | orchestrator | Wednesday 10 September 2025 01:06:27 +0000 (0:00:00.863) 0:05:45.303 ***
2025-09-10 01:09:42.936729 | orchestrator | changed: [testbed-node-3]
2025-09-10 01:09:42.936739 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:09:42.936748 | orchestrator | changed: [testbed-node-4]
2025-09-10 01:09:42.936758 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:09:42.936767 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:09:42.936776 | orchestrator | changed: [testbed-node-5]
2025-09-10 01:09:42.936786 | orchestrator |
2025-09-10 01:09:42.936795 | orchestrator | TASK [nova-cell : Copying over existing policy file] ***************************
2025-09-10 01:09:42.936805 | orchestrator | Wednesday 10 September 2025 01:06:30 +0000 (0:00:03.404) 0:05:48.707 ***
2025-09-10 01:09:42.936820 | orchestrator | skipping: [testbed-node-3] =>
(item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-10 01:09:42.936831 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-10 01:09:42.936841 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-10 01:09:42.936851 | orchestrator | skipping: [testbed-node-3]
2025-09-10 01:09:42.936865 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-10 01:09:42.936882 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-10 01:09:42.936892 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-10 01:09:42.936901 | orchestrator | skipping: [testbed-node-4]
2025-09-10 01:09:42.936917 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-10 01:09:42.936927 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-10 01:09:42.936937 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-10 01:09:42.936953 | orchestrator | skipping: [testbed-node-5]
2025-09-10 01:09:42.936968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-10 01:09:42.936978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy',
'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-10 01:09:42.936988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-10 01:09:42.937003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-10 01:09:42.937013 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:09:42.937023 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:09:42.937033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 
'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-10 01:09:42.937043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-10 01:09:42.937052 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:09:42.937068 | orchestrator | 2025-09-10 01:09:42.937078 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-09-10 01:09:42.937087 | orchestrator | Wednesday 10 September 2025 01:06:32 +0000 (0:00:01.394) 0:05:50.102 *** 2025-09-10 01:09:42.937097 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-09-10 01:09:42.937106 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-09-10 01:09:42.937116 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-09-10 01:09:42.937125 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-09-10 01:09:42.937135 | orchestrator | skipping: [testbed-node-3] 2025-09-10 01:09:42.937144 | orchestrator | skipping: 
[testbed-node-5] => (item=nova-compute)  2025-09-10 01:09:42.937158 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-09-10 01:09:42.937168 | orchestrator | skipping: [testbed-node-4] 2025-09-10 01:09:42.937177 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-09-10 01:09:42.937187 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-09-10 01:09:42.937196 | orchestrator | skipping: [testbed-node-5] 2025-09-10 01:09:42.937206 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-09-10 01:09:42.937215 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-09-10 01:09:42.937225 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:09:42.937234 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:09:42.937243 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-09-10 01:09:42.937253 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-09-10 01:09:42.937262 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:09:42.937272 | orchestrator | 2025-09-10 01:09:42.937281 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-09-10 01:09:42.937291 | orchestrator | Wednesday 10 September 2025 01:06:32 +0000 (0:00:00.633) 0:05:50.736 *** 2025-09-10 01:09:42.937301 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-10 01:09:42.937317 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-10 01:09:42.937327 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-10 01:09:42.937343 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-10 01:09:42.937361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-10 01:09:42.937371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-10 
01:09:42.937381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-10 01:09:42.937397 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-10 01:09:42.937407 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-10 01:09:42.937422 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-10 01:09:42.937437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-10 01:09:42.937447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-10 01:09:42.937457 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': 
{'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-10 01:09:42.937471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-10 01:09:42.937481 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-10 01:09:42.937513 | orchestrator | 2025-09-10 01:09:42.937523 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-10 01:09:42.937533 | orchestrator | Wednesday 10 September 2025 01:06:35 +0000 (0:00:02.770) 0:05:53.506 *** 2025-09-10 01:09:42.937542 | orchestrator | skipping: [testbed-node-3] 2025-09-10 01:09:42.937552 | orchestrator | skipping: [testbed-node-4] 2025-09-10 01:09:42.937562 | orchestrator | skipping: [testbed-node-5] 2025-09-10 01:09:42.937571 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:09:42.937580 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:09:42.937590 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:09:42.937599 | orchestrator | 2025-09-10 01:09:42.937609 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-10 01:09:42.937618 | orchestrator | Wednesday 10 September 2025 01:06:36 +0000 (0:00:00.645) 0:05:54.152 *** 2025-09-10 01:09:42.937628 | orchestrator | 2025-09-10 01:09:42.937637 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-10 01:09:42.937647 | orchestrator | Wednesday 10 September 2025 01:06:36 +0000 (0:00:00.126) 0:05:54.278 *** 2025-09-10 01:09:42.937656 | orchestrator | 2025-09-10 01:09:42.937665 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-10 01:09:42.937675 | orchestrator | Wednesday 10 September 2025 01:06:36 +0000 (0:00:00.123) 0:05:54.401 *** 2025-09-10 01:09:42.937684 | orchestrator | 2025-09-10 01:09:42.937694 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-10 01:09:42.937703 | orchestrator | Wednesday 10 September 2025 01:06:36 +0000 
(0:00:00.123) 0:05:54.525 *** 2025-09-10 01:09:42.937712 | orchestrator | 2025-09-10 01:09:42.937722 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-10 01:09:42.937731 | orchestrator | Wednesday 10 September 2025 01:06:36 +0000 (0:00:00.124) 0:05:54.650 *** 2025-09-10 01:09:42.937741 | orchestrator | 2025-09-10 01:09:42.937755 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-10 01:09:42.937765 | orchestrator | Wednesday 10 September 2025 01:06:36 +0000 (0:00:00.129) 0:05:54.780 *** 2025-09-10 01:09:42.937774 | orchestrator | 2025-09-10 01:09:42.937784 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-09-10 01:09:42.937793 | orchestrator | Wednesday 10 September 2025 01:06:37 +0000 (0:00:00.262) 0:05:55.043 *** 2025-09-10 01:09:42.937803 | orchestrator | changed: [testbed-node-2] 2025-09-10 01:09:42.937812 | orchestrator | changed: [testbed-node-1] 2025-09-10 01:09:42.937822 | orchestrator | changed: [testbed-node-0] 2025-09-10 01:09:42.937831 | orchestrator | 2025-09-10 01:09:42.937840 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-09-10 01:09:42.937850 | orchestrator | Wednesday 10 September 2025 01:06:48 +0000 (0:00:11.548) 0:06:06.591 *** 2025-09-10 01:09:42.937859 | orchestrator | changed: [testbed-node-0] 2025-09-10 01:09:42.937869 | orchestrator | changed: [testbed-node-2] 2025-09-10 01:09:42.937878 | orchestrator | changed: [testbed-node-1] 2025-09-10 01:09:42.937888 | orchestrator | 2025-09-10 01:09:42.937897 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-09-10 01:09:42.937907 | orchestrator | Wednesday 10 September 2025 01:07:00 +0000 (0:00:11.846) 0:06:18.438 *** 2025-09-10 01:09:42.937916 | orchestrator | changed: [testbed-node-3] 2025-09-10 01:09:42.937926 | orchestrator | 
changed: [testbed-node-4] 2025-09-10 01:09:42.937935 | orchestrator | changed: [testbed-node-5] 2025-09-10 01:09:42.937945 | orchestrator | 2025-09-10 01:09:42.937954 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-09-10 01:09:42.937964 | orchestrator | Wednesday 10 September 2025 01:07:26 +0000 (0:00:25.617) 0:06:44.056 *** 2025-09-10 01:09:42.937979 | orchestrator | changed: [testbed-node-3] 2025-09-10 01:09:42.937988 | orchestrator | changed: [testbed-node-4] 2025-09-10 01:09:42.937998 | orchestrator | changed: [testbed-node-5] 2025-09-10 01:09:42.938007 | orchestrator | 2025-09-10 01:09:42.938048 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-09-10 01:09:42.938060 | orchestrator | Wednesday 10 September 2025 01:08:00 +0000 (0:00:34.102) 0:07:18.158 *** 2025-09-10 01:09:42.938070 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 2025-09-10 01:09:42.938079 | orchestrator | changed: [testbed-node-3] 2025-09-10 01:09:42.938089 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 
2025-09-10 01:09:42.938098 | orchestrator | changed: [testbed-node-4]
2025-09-10 01:09:42.938108 | orchestrator | changed: [testbed-node-5]
2025-09-10 01:09:42.938117 | orchestrator |
2025-09-10 01:09:42.938127 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2025-09-10 01:09:42.938136 | orchestrator | Wednesday 10 September 2025 01:08:06 +0000 (0:00:06.527) 0:07:24.686 ***
2025-09-10 01:09:42.938151 | orchestrator | changed: [testbed-node-3]
2025-09-10 01:09:42.938161 | orchestrator | changed: [testbed-node-4]
2025-09-10 01:09:42.938170 | orchestrator | changed: [testbed-node-5]
2025-09-10 01:09:42.938180 | orchestrator |
2025-09-10 01:09:42.938190 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2025-09-10 01:09:42.938200 | orchestrator | Wednesday 10 September 2025 01:08:07 +0000 (0:00:00.961) 0:07:25.647 ***
2025-09-10 01:09:42.938209 | orchestrator | changed: [testbed-node-3]
2025-09-10 01:09:42.938218 | orchestrator | changed: [testbed-node-5]
2025-09-10 01:09:42.938228 | orchestrator | changed: [testbed-node-4]
2025-09-10 01:09:42.938237 | orchestrator |
2025-09-10 01:09:42.938247 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2025-09-10 01:09:42.938256 | orchestrator | Wednesday 10 September 2025 01:08:31 +0000 (0:00:23.351) 0:07:48.999 ***
2025-09-10 01:09:42.938265 | orchestrator | skipping: [testbed-node-3]
2025-09-10 01:09:42.938275 | orchestrator |
2025-09-10 01:09:42.938284 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2025-09-10 01:09:42.938294 | orchestrator | Wednesday 10 September 2025 01:08:31 +0000 (0:00:00.126) 0:07:49.126 ***
2025-09-10 01:09:42.938303 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:09:42.938313 | orchestrator | skipping: [testbed-node-4]
2025-09-10 01:09:42.938322 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:09:42.938332 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:09:42.938341 | orchestrator | skipping: [testbed-node-5]
2025-09-10 01:09:42.938351 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2025-09-10 01:09:42.938360 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-10 01:09:42.938370 | orchestrator |
2025-09-10 01:09:42.938379 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2025-09-10 01:09:42.938388 | orchestrator | Wednesday 10 September 2025 01:08:53 +0000 (0:00:22.650) 0:08:11.777 ***
2025-09-10 01:09:42.938398 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:09:42.938407 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:09:42.938416 | orchestrator | skipping: [testbed-node-5]
2025-09-10 01:09:42.938426 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:09:42.938435 | orchestrator | skipping: [testbed-node-4]
2025-09-10 01:09:42.938444 | orchestrator | skipping: [testbed-node-3]
2025-09-10 01:09:42.938454 | orchestrator |
2025-09-10 01:09:42.938463 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2025-09-10 01:09:42.938472 | orchestrator | Wednesday 10 September 2025 01:09:04 +0000 (0:00:10.912) 0:08:22.689 ***
2025-09-10 01:09:42.938482 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:09:42.938535 | orchestrator | skipping: [testbed-node-4]
2025-09-10 01:09:42.938553 | orchestrator | skipping: [testbed-node-5]
2025-09-10 01:09:42.938562 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:09:42.938572 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:09:42.938580 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3
2025-09-10 01:09:42.938588 | orchestrator |
2025-09-10 01:09:42.938595 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-09-10 01:09:42.938603 | orchestrator | Wednesday 10 September 2025 01:09:07 +0000 (0:00:03.174) 0:08:25.863 ***
2025-09-10 01:09:42.938611 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-10 01:09:42.938618 | orchestrator |
2025-09-10 01:09:42.938631 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-09-10 01:09:42.938639 | orchestrator | Wednesday 10 September 2025 01:09:19 +0000 (0:00:11.941) 0:08:37.805 ***
2025-09-10 01:09:42.938647 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-10 01:09:42.938654 | orchestrator |
2025-09-10 01:09:42.938662 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2025-09-10 01:09:42.938670 | orchestrator | Wednesday 10 September 2025 01:09:21 +0000 (0:00:01.290) 0:08:39.096 ***
2025-09-10 01:09:42.938678 | orchestrator | skipping: [testbed-node-3]
2025-09-10 01:09:42.938685 | orchestrator |
2025-09-10 01:09:42.938693 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2025-09-10 01:09:42.938701 | orchestrator | Wednesday 10 September 2025 01:09:22 +0000 (0:00:01.205) 0:08:40.302 ***
2025-09-10 01:09:42.938709 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-10 01:09:42.938716 | orchestrator |
2025-09-10 01:09:42.938724 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2025-09-10 01:09:42.938732 | orchestrator | Wednesday 10 September 2025 01:09:32 +0000 (0:00:10.527) 0:08:50.829 ***
2025-09-10 01:09:42.938740 | orchestrator | ok: [testbed-node-3]
2025-09-10 01:09:42.938748 | orchestrator | ok: [testbed-node-4]
2025-09-10 01:09:42.938755 | orchestrator | ok: [testbed-node-5]
2025-09-10 01:09:42.938763 | orchestrator | ok: [testbed-node-0]
2025-09-10 01:09:42.938771 | orchestrator | ok: [testbed-node-1]
2025-09-10 01:09:42.938779 | orchestrator | ok: [testbed-node-2]
2025-09-10 01:09:42.938786 | orchestrator |
2025-09-10 01:09:42.938794 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2025-09-10 01:09:42.938802 | orchestrator |
2025-09-10 01:09:42.938810 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2025-09-10 01:09:42.938817 | orchestrator | Wednesday 10 September 2025 01:09:34 +0000 (0:00:01.800) 0:08:52.629 ***
2025-09-10 01:09:42.938825 | orchestrator | changed: [testbed-node-0]
2025-09-10 01:09:42.938833 | orchestrator | changed: [testbed-node-1]
2025-09-10 01:09:42.938841 | orchestrator | changed: [testbed-node-2]
2025-09-10 01:09:42.938848 | orchestrator |
2025-09-10 01:09:42.938856 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2025-09-10 01:09:42.938864 | orchestrator |
2025-09-10 01:09:42.938871 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2025-09-10 01:09:42.938879 | orchestrator | Wednesday 10 September 2025 01:09:35 +0000 (0:00:01.205) 0:08:53.835 ***
2025-09-10 01:09:42.938887 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:09:42.938895 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:09:42.938903 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:09:42.938910 | orchestrator |
2025-09-10 01:09:42.938918 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2025-09-10 01:09:42.938926 | orchestrator |
2025-09-10 01:09:42.938939 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2025-09-10 01:09:42.938947 | orchestrator | Wednesday 10 September 2025 01:09:36 +0000 (0:00:00.587) 0:08:54.422 ***
2025-09-10 01:09:42.938954 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2025-09-10 01:09:42.938962 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-09-10 01:09:42.938970 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-09-10 01:09:42.938982 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2025-09-10 01:09:42.938990 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2025-09-10 01:09:42.938998 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2025-09-10 01:09:42.939006 | orchestrator | skipping: [testbed-node-3]
2025-09-10 01:09:42.939013 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2025-09-10 01:09:42.939021 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-09-10 01:09:42.939029 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-09-10 01:09:42.939036 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2025-09-10 01:09:42.939044 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2025-09-10 01:09:42.939052 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2025-09-10 01:09:42.939060 | orchestrator | skipping: [testbed-node-4]
2025-09-10 01:09:42.939067 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2025-09-10 01:09:42.939075 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-09-10 01:09:42.939083 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-09-10 01:09:42.939090 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2025-09-10 01:09:42.939098 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2025-09-10 01:09:42.939106 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2025-09-10 01:09:42.939113 | orchestrator | skipping: [testbed-node-5]
2025-09-10 01:09:42.939121 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2025-09-10 01:09:42.939129 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-09-10 01:09:42.939136 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-09-10 01:09:42.939144 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2025-09-10 01:09:42.939152 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2025-09-10 01:09:42.939159 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2025-09-10 01:09:42.939167 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:09:42.939175 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2025-09-10 01:09:42.939183 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-09-10 01:09:42.939190 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-09-10 01:09:42.939198 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2025-09-10 01:09:42.939206 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2025-09-10 01:09:42.939213 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2025-09-10 01:09:42.939226 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:09:42.939233 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2025-09-10 01:09:42.939241 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-09-10 01:09:42.939249 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-09-10 01:09:42.939257 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2025-09-10 01:09:42.939264 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2025-09-10 01:09:42.939272 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2025-09-10 01:09:42.939280 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:09:42.939287 | orchestrator |
2025-09-10 01:09:42.939295 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2025-09-10 01:09:42.939303 | orchestrator |
2025-09-10 01:09:42.939310 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2025-09-10 01:09:42.939318 | orchestrator | Wednesday 10 September 2025 01:09:37 +0000 (0:00:01.287) 0:08:55.710 ***
2025-09-10 01:09:42.939326 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2025-09-10 01:09:42.939334 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2025-09-10 01:09:42.939347 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:09:42.939354 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2025-09-10 01:09:42.939362 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2025-09-10 01:09:42.939370 | orchestrator | skipping: [testbed-node-1]
2025-09-10 01:09:42.939377 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2025-09-10 01:09:42.939385 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2025-09-10 01:09:42.939393 | orchestrator | skipping: [testbed-node-2]
2025-09-10 01:09:42.939400 | orchestrator |
2025-09-10 01:09:42.939408 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2025-09-10 01:09:42.939415 | orchestrator |
2025-09-10 01:09:42.939423 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2025-09-10 01:09:42.939431 | orchestrator | Wednesday 10 September 2025 01:09:38 +0000 (0:00:00.769) 0:08:56.480 ***
2025-09-10 01:09:42.939439 | orchestrator | skipping: [testbed-node-0]
2025-09-10 01:09:42.939446 | orchestrator |
2025-09-10 01:09:42.939454 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2025-09-10 01:09:42.939462 | orchestrator |
2025-09-10 01:09:42.939469 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2025-09-10 01:09:42.939477 |
orchestrator | Wednesday 10 September 2025 01:09:39 +0000 (0:00:00.754) 0:08:57.234 *** 2025-09-10 01:09:42.939497 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:09:42.939506 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:09:42.939513 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:09:42.939521 | orchestrator | 2025-09-10 01:09:42.939534 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-10 01:09:42.939542 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-10 01:09:42.939550 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-09-10 01:09:42.939558 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-09-10 01:09:42.939566 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-09-10 01:09:42.939573 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-09-10 01:09:42.939581 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-09-10 01:09:42.939589 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-09-10 01:09:42.939597 | orchestrator | 2025-09-10 01:09:42.939604 | orchestrator | 2025-09-10 01:09:42.939612 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-10 01:09:42.939620 | orchestrator | Wednesday 10 September 2025 01:09:39 +0000 (0:00:00.425) 0:08:57.660 *** 2025-09-10 01:09:42.939628 | orchestrator | =============================================================================== 2025-09-10 01:09:42.939636 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 34.10s 
2025-09-10 01:09:42.939644 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 31.10s 2025-09-10 01:09:42.939652 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 25.62s 2025-09-10 01:09:42.939659 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 24.19s 2025-09-10 01:09:42.939667 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 23.35s 2025-09-10 01:09:42.939675 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.65s 2025-09-10 01:09:42.939688 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 21.10s 2025-09-10 01:09:42.939695 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 17.66s 2025-09-10 01:09:42.939703 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.17s 2025-09-10 01:09:42.939711 | orchestrator | nova-cell : Create cell ------------------------------------------------ 12.89s 2025-09-10 01:09:42.939723 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.99s 2025-09-10 01:09:42.939731 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.94s 2025-09-10 01:09:42.939738 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 11.85s 2025-09-10 01:09:42.939746 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 11.55s 2025-09-10 01:09:42.939754 | orchestrator | nova : Restart nova-api container -------------------------------------- 10.93s 2025-09-10 01:09:42.939762 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 10.91s 2025-09-10 01:09:42.939769 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 10.53s 2025-09-10 
2025-09-10 01:09:42.939777 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.05s
2025-09-10 01:09:42.939785 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------ 10.00s
2025-09-10 01:09:42.939792 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 9.10s
2025-09-10 01:09:42.939800 | orchestrator | 2025-09-10 01:09:42 | INFO  | Task 917bbdb6-2035-4f1b-ac78-71ec915cfe59 is in state STARTED
2025-09-10 01:09:42.939808 | orchestrator | 2025-09-10 01:09:42 | INFO  | Wait 1 second(s) until the next check
2025-09-10 01:12:54.728243 | orchestrator | 2025-09-10 01:12:54 | INFO  | Task 917bbdb6-2035-4f1b-ac78-71ec915cfe59 is in state SUCCESS 2025-09-10 01:12:54.729972 | orchestrator | 2025-09-10 01:12:54.730063 | orchestrator | 2025-09-10 01:12:54.730078 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-10 01:12:54.730090 | orchestrator | 2025-09-10 01:12:54.730101 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-10 01:12:54.730113 | orchestrator | Wednesday 10 September 2025 01:07:57 +0000 (0:00:00.293) 0:00:00.293 *** 2025-09-10 01:12:54.730124 | orchestrator | ok: [testbed-node-0] 2025-09-10 01:12:54.730136 | orchestrator | ok: [testbed-node-1] 2025-09-10 01:12:54.730147 | orchestrator | ok: [testbed-node-2] 2025-09-10 01:12:54.730158 | orchestrator | 2025-09-10 01:12:54.730169 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-10 01:12:54.730180 | orchestrator | Wednesday 10 September 2025 01:07:58 +0000 (0:00:00.340) 0:00:00.634 *** 2025-09-10 01:12:54.730191 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-09-10 01:12:54.730202 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-09-10 01:12:54.730213 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-09-10 01:12:54.730224 | orchestrator | 2025-09-10 01:12:54.730234 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-09-10 01:12:54.730245 | orchestrator | 2025-09-10 01:12:54.730255 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-10 01:12:54.730266 | orchestrator | Wednesday 10 September 2025 01:07:58 +0000 (0:00:00.458) 0:00:01.093 *** 2025-09-10 01:12:54.730454 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2025-09-10 01:12:54.730470 | orchestrator | 2025-09-10 01:12:54.730481 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-09-10 01:12:54.730492 | orchestrator | Wednesday 10 September 2025 01:07:59 +0000 (0:00:00.604) 0:00:01.697 *** 2025-09-10 01:12:54.730504 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-09-10 01:12:54.730648 | orchestrator | 2025-09-10 01:12:54.730668 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-09-10 01:12:54.730681 | orchestrator | Wednesday 10 September 2025 01:08:02 +0000 (0:00:03.594) 0:00:05.291 *** 2025-09-10 01:12:54.730693 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-09-10 01:12:54.730706 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-09-10 01:12:54.730719 | orchestrator | 2025-09-10 01:12:54.730731 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-09-10 01:12:54.730744 | orchestrator | Wednesday 10 September 2025 01:08:09 +0000 (0:00:06.738) 0:00:12.030 *** 2025-09-10 01:12:54.730756 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-10 01:12:54.730769 | orchestrator | 2025-09-10 01:12:54.730796 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-09-10 01:12:54.730809 | orchestrator | Wednesday 10 September 2025 01:08:13 +0000 (0:00:03.499) 0:00:15.529 *** 2025-09-10 01:12:54.730821 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-10 01:12:54.730834 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-09-10 01:12:54.730847 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-09-10 01:12:54.730859 | orchestrator | 2025-09-10 01:12:54.730872 | 
orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-09-10 01:12:54.730885 | orchestrator | Wednesday 10 September 2025 01:08:21 +0000 (0:00:08.336) 0:00:23.866 *** 2025-09-10 01:12:54.730898 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-10 01:12:54.730910 | orchestrator | 2025-09-10 01:12:54.730923 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-09-10 01:12:54.730936 | orchestrator | Wednesday 10 September 2025 01:08:25 +0000 (0:00:03.656) 0:00:27.522 *** 2025-09-10 01:12:54.730948 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-09-10 01:12:54.730958 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-09-10 01:12:54.730969 | orchestrator | 2025-09-10 01:12:54.730979 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-09-10 01:12:54.730990 | orchestrator | Wednesday 10 September 2025 01:08:33 +0000 (0:00:08.161) 0:00:35.684 *** 2025-09-10 01:12:54.731024 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-09-10 01:12:54.731035 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-09-10 01:12:54.731046 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-09-10 01:12:54.731056 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-09-10 01:12:54.731067 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-09-10 01:12:54.731078 | orchestrator | 2025-09-10 01:12:54.731088 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-10 01:12:54.731099 | orchestrator | Wednesday 10 September 2025 01:08:49 +0000 (0:00:16.239) 0:00:51.923 *** 2025-09-10 01:12:54.731110 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, 
testbed-node-1, testbed-node-2

2025-09-10 01:12:54.731131 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
Wednesday 10 September 2025 01:08:50 +0000 (0:00:00.636) 0:00:52.560 ***
changed: [testbed-node-0]

TASK [octavia : Create nova keypair for amphora] *******************************
Wednesday 10 September 2025 01:08:54 +0000 (0:00:04.630) 0:00:57.190 ***
changed: [testbed-node-0]

TASK [octavia : Get service project id] ****************************************
Wednesday 10 September 2025 01:08:59 +0000 (0:00:05.066) 0:01:02.257 ***
ok: [testbed-node-0]

TASK [octavia : Create security groups for octavia] ****************************
Wednesday 10 September 2025 01:09:03 +0000 (0:00:03.494) 0:01:05.751 ***
changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)

TASK [octavia : Add rules for security groups] *********************************
Wednesday 10 September 2025 01:09:13 +0000 (0:00:10.617) 0:01:16.368 ***
changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])

TASK [octavia : Create loadbalancer management network] ************************
Wednesday 10 September 2025 01:09:29 +0000 (0:00:15.241) 0:01:31.609 ***
changed: [testbed-node-0]

TASK [octavia : Create loadbalancer management subnet] *************************
Wednesday 10 September 2025 01:09:33 +0000 (0:00:04.738) 0:01:36.348 ***
changed: [testbed-node-0]

TASK [octavia : Create loadbalancer management router for IPv6] ****************
Wednesday 10 September 2025 01:09:39 +0000 (0:00:05.267) 0:01:41.616 ***
skipping: [testbed-node-0]

TASK [octavia : Update loadbalancer management subnet] *************************
Wednesday 10 September 2025 01:09:39 +0000 (0:00:00.230) 0:01:41.846 ***
changed: [testbed-node-0]

TASK [octavia : include_tasks] *************************************************
Wednesday 10 September 2025 01:09:44 +0000 (0:00:05.450) 0:01:47.297 ***
included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [octavia : Create ports for Octavia health-manager nodes] *****************
Wednesday 10 September 2025 01:09:45 +0000 (0:00:01.070) 0:01:48.368 ***
changed: [testbed-node-2]
changed: [testbed-node-1]
changed: [testbed-node-0]

TASK [octavia : Update Octavia health manager port host_id] ********************
Wednesday 10 September 2025 01:09:51 +0000 (0:00:05.186) 0:01:53.554 ***
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-0]

TASK [octavia : Add Octavia port to openvswitch br-int] ************************
Wednesday 10 September 2025 01:09:55 +0000 (0:00:04.288) 0:01:57.843 ***
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
TASK [octavia : Install isc-dhcp-client package] *******************************
Wednesday 10 September 2025 01:09:56 +0000 (0:00:00.836) 0:01:58.679 ***
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [octavia : Create octavia dhclient conf] **********************************
Wednesday 10 September 2025 01:09:58 +0000 (0:00:02.069) 0:02:00.749 ***
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

TASK [octavia : Create octavia-interface service] ******************************
Wednesday 10 September 2025 01:09:59 +0000 (0:00:01.307) 0:02:02.056 ***
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [octavia : Restart octavia-interface.service if required] *****************
Wednesday 10 September 2025 01:10:00 +0000 (0:00:01.315) 0:02:03.371 ***
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-0]

TASK [octavia : Enable and start octavia-interface.service] ********************
Wednesday 10 September 2025 01:10:02 +0000 (0:00:02.043) 0:02:05.415 ***
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [octavia : Wait for interface ohm0 ip appear] *****************************
Wednesday 10 September 2025 01:10:04 +0000 (0:00:01.500) 0:02:06.916 ***
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [octavia : Gather facts] **************************************************
Wednesday 10 September 2025 01:10:05 +0000 (0:00:00.907) 0:02:07.824 ***
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-0]

TASK [octavia : include_tasks] *************************************************
Wednesday 10 September 2025 01:10:08 +0000 (0:00:02.762) 0:02:10.586 ***
included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [octavia : Get amphora flavor info] ***************************************
Wednesday 10 September 2025 01:10:08 +0000 (0:00:00.517) 0:02:11.104 ***
ok: [testbed-node-0]

TASK [octavia : Get service project id] ****************************************
Wednesday 10 September 2025 01:10:13 +0000 (0:00:04.395) 0:02:15.499 ***
ok: [testbed-node-0]

TASK [octavia : Get security groups for octavia] *******************************
Wednesday 10 September 2025 01:10:16 +0000 (0:00:07.829) 0:02:18.672 ***
ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)

TASK [octavia : Get loadbalancer management network] ***************************
Wednesday 10 September 2025 01:10:24 +0000 (0:00:03.343) 0:02:26.501 ***
ok: [testbed-node-0]

TASK [octavia : Set octavia resources facts] ***********************************
Wednesday 10 September 2025 01:10:27 +0000 (0:00:00.328) 0:02:29.844 ***
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [octavia : Ensuring config directories exist] *****************************
Wednesday 10 September 2025 01:10:27 +0000 (0:00:00.328) 0:02:30.173 ***
changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})

TASK [octavia : Check if policies shall be overwritten] ************************
Wednesday 10 September 2025 01:10:30 +0000 (0:00:02.537) 0:02:32.710 ***
skipping: [testbed-node-0]

TASK [octavia : Set octavia policy file] ***************************************
Wednesday 10 September 2025 01:10:30 +0000 (0:00:00.148) 0:02:32.859 ***
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [octavia : Copying over existing policy file] *****************************
Wednesday 10 September 2025 01:10:30 +0000 (0:00:00.516) 0:02:33.376 ***
skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
skipping: [testbed-node-2]

TASK [octavia : include_tasks] *************************************************
Wednesday 10 September 2025 01:10:31 +0000 (0:00:00.660) 0:02:34.036 ***
included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [service-cert-copy : octavia | Copying over extra CA certificates] ********
Wednesday 10 September 2025 01:10:32 +0000 (0:00:00.560) 0:02:34.597 ***
changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-10 01:12:54.734172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-10 01:12:54.734187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-10 01:12:54.734203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-10 01:12:54.734213 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-10 01:12:54.734231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-10 01:12:54.734241 | orchestrator | 2025-09-10 01:12:54.734251 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2025-09-10 01:12:54.734261 | orchestrator | Wednesday 10 September 2025 01:10:37 +0000 (0:00:05.392) 0:02:39.990 *** 2025-09-10 01:12:54.734271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-10 01:12:54.734281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-10 01:12:54.734295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-10 01:12:54.734311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-10 01:12:54.734321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-10 01:12:54.734331 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:12:54.734346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-10 01:12:54.734356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-10 01:12:54.734366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-10 01:12:54.734404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-10 01:12:54.734422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-10 01:12:54.734431 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:12:54.734441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-10 01:12:54.734456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-10 
01:12:54.734466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-10 01:12:54.734476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-10 01:12:54.734486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-10 01:12:54.734502 | orchestrator | skipping: [testbed-node-2] 2025-09-10 
01:12:54.734511 | orchestrator | 2025-09-10 01:12:54.734526 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2025-09-10 01:12:54.734536 | orchestrator | Wednesday 10 September 2025 01:10:38 +0000 (0:00:00.919) 0:02:40.909 *** 2025-09-10 01:12:54.734547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-10 01:12:54.734559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-10 01:12:54.734571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 
'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-10 01:12:54.734587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-10 01:12:54.734600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-10 01:12:54.734617 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:12:54.734658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-10 01:12:54.734670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-10 01:12:54.734682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-10 01:12:54.734693 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-10 01:12:54.734712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-10 01:12:54.734724 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:12:54.734735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-10 01:12:54.734753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-10 01:12:54.734768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-10 01:12:54.734780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-10 01:12:54.734791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-10 01:12:54.734802 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:12:54.734813 | orchestrator | 2025-09-10 01:12:54.734824 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2025-09-10 01:12:54.734835 | orchestrator | Wednesday 10 September 2025 01:10:39 +0000 (0:00:00.875) 0:02:41.785 *** 2025-09-10 01:12:54.734854 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-10 01:12:54.734873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-10 01:12:54.734889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-10 
01:12:54.734901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-10 01:12:54.734912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-10 01:12:54.734922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-10 01:12:54.734937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-10 01:12:54.734953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-10 01:12:54.734964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-10 01:12:54.734978 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-10 01:12:54.734988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-10 01:12:54.734998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-10 01:12:54.735014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-10 01:12:54.735024 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-10 01:12:54.735041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-10 01:12:54.735050 | orchestrator | 2025-09-10 01:12:54.735060 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2025-09-10 01:12:54.735069 | orchestrator | Wednesday 10 September 2025 01:10:44 +0000 (0:00:05.142) 0:02:46.927 *** 2025-09-10 01:12:54.735079 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-09-10 01:12:54.735089 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-09-10 01:12:54.735098 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-09-10 01:12:54.735108 | orchestrator | 2025-09-10 01:12:54.735117 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2025-09-10 01:12:54.735131 | orchestrator | Wednesday 10 September 2025 01:10:46 +0000 (0:00:02.061) 0:02:48.988 *** 2025-09-10 01:12:54.735141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-10 01:12:54.735151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-10 01:12:54.735168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-10 01:12:54.735184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-10 01:12:54.735194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-10 01:12:54.735208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-10 01:12:54.735218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-10 01:12:54.735228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-10 01:12:54.735238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-10 01:12:54.735258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-10 01:12:54.735269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-10 01:12:54.735279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-10 01:12:54.735294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-10 01:12:54.735304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-10 
01:12:54.735314 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-10 01:12:54.735323 | orchestrator | 2025-09-10 01:12:54.735333 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2025-09-10 01:12:54.735342 | orchestrator | Wednesday 10 September 2025 01:11:02 +0000 (0:00:16.409) 0:03:05.398 *** 2025-09-10 01:12:54.735359 | orchestrator | changed: [testbed-node-0] 2025-09-10 01:12:54.735368 | orchestrator | changed: [testbed-node-1] 2025-09-10 01:12:54.735378 | orchestrator | changed: [testbed-node-2] 2025-09-10 01:12:54.735387 | orchestrator | 2025-09-10 01:12:54.735397 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2025-09-10 01:12:54.735406 | orchestrator | Wednesday 10 September 2025 01:11:04 +0000 (0:00:01.450) 0:03:06.848 *** 2025-09-10 01:12:54.735416 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-09-10 01:12:54.735425 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-09-10 01:12:54.735439 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-09-10 01:12:54.735449 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-09-10 01:12:54.735458 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-09-10 01:12:54.735468 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 
2025-09-10 01:12:54.735478 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-09-10 01:12:54.735487 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-09-10 01:12:54.735496 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-09-10 01:12:54.735506 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-09-10 01:12:54.735515 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-09-10 01:12:54.735524 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-09-10 01:12:54.735534 | orchestrator | 2025-09-10 01:12:54.735543 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2025-09-10 01:12:54.735553 | orchestrator | Wednesday 10 September 2025 01:11:09 +0000 (0:00:05.094) 0:03:11.942 *** 2025-09-10 01:12:54.735562 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-09-10 01:12:54.735571 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-09-10 01:12:54.735581 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-09-10 01:12:54.735590 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-09-10 01:12:54.735599 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-09-10 01:12:54.735608 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-09-10 01:12:54.735618 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-09-10 01:12:54.735679 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-09-10 01:12:54.735689 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-09-10 01:12:54.735698 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-09-10 01:12:54.735708 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-09-10 
01:12:54.735717 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-09-10 01:12:54.735726 | orchestrator | 2025-09-10 01:12:54.735736 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2025-09-10 01:12:54.735745 | orchestrator | Wednesday 10 September 2025 01:11:14 +0000 (0:00:05.210) 0:03:17.153 *** 2025-09-10 01:12:54.735754 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-09-10 01:12:54.735769 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-09-10 01:12:54.735778 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-09-10 01:12:54.735788 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-09-10 01:12:54.735798 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-09-10 01:12:54.735807 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-09-10 01:12:54.735816 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-09-10 01:12:54.735825 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-09-10 01:12:54.735845 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-09-10 01:12:54.735855 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-09-10 01:12:54.735864 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-09-10 01:12:54.735873 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-09-10 01:12:54.735883 | orchestrator | 2025-09-10 01:12:54.735892 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2025-09-10 01:12:54.735902 | orchestrator | Wednesday 10 September 2025 01:11:20 +0000 (0:00:05.470) 0:03:22.623 *** 2025-09-10 01:12:54.735912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 
'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-10 01:12:54.735929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-10 01:12:54.735940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-10 01:12:54.735955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-10 01:12:54.735965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-10 01:12:54.735981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 
'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-10 01:12:54.735991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-10 01:12:54.736006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-10 01:12:54.736016 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-10 01:12:54.736026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-10 01:12:54.736040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-10 01:12:54.736057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-10 01:12:54.736067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-10 01:12:54.736076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-10 01:12:54.736092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-10 01:12:54.736102 | orchestrator | 2025-09-10 01:12:54.736112 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-10 01:12:54.736122 | orchestrator | Wednesday 10 September 2025 01:11:23 +0000 (0:00:03.726) 0:03:26.350 *** 2025-09-10 01:12:54.736131 | orchestrator | skipping: [testbed-node-0] 2025-09-10 01:12:54.736141 | orchestrator | skipping: [testbed-node-1] 2025-09-10 01:12:54.736150 | orchestrator | skipping: [testbed-node-2] 2025-09-10 01:12:54.736160 | orchestrator | 2025-09-10 01:12:54.736169 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2025-09-10 01:12:54.736178 | orchestrator | Wednesday 10 September 2025 01:11:24 +0000 (0:00:00.320) 0:03:26.670 *** 2025-09-10 01:12:54.736188 | orchestrator | changed: [testbed-node-0] 2025-09-10 01:12:54.736196 | orchestrator | 2025-09-10 01:12:54.736204 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2025-09-10 01:12:54.736212 | orchestrator | Wednesday 10 September 2025 01:11:26 +0000 (0:00:02.052) 0:03:28.722 *** 2025-09-10 01:12:54.736220 | orchestrator | changed: [testbed-node-0] 2025-09-10 01:12:54.736227 | orchestrator | 2025-09-10 01:12:54.736235 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2025-09-10 01:12:54.736243 | orchestrator | Wednesday 10 September 2025 01:11:28 +0000 (0:00:02.094) 0:03:30.816 *** 2025-09-10 01:12:54.736251 | orchestrator | changed: [testbed-node-0] 2025-09-10 01:12:54.736258 | orchestrator | 2025-09-10 01:12:54.736266 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2025-09-10 01:12:54.736279 | orchestrator | 
Wednesday 10 September 2025 01:11:30 +0000 (0:00:02.191) 0:03:33.007 *** 2025-09-10 01:12:54.736287 | orchestrator | changed: [testbed-node-0] 2025-09-10 01:12:54.736295 | orchestrator | 2025-09-10 01:12:54.736303 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2025-09-10 01:12:54.736310 | orchestrator | Wednesday 10 September 2025 01:11:32 +0000 (0:00:02.155) 0:03:35.163 *** 2025-09-10 01:12:54.736318 | orchestrator | changed: [testbed-node-0] 2025-09-10 01:12:54.736326 | orchestrator | 2025-09-10 01:12:54.736333 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-09-10 01:12:54.736341 | orchestrator | Wednesday 10 September 2025 01:11:54 +0000 (0:00:21.399) 0:03:56.563 *** 2025-09-10 01:12:54.736349 | orchestrator | 2025-09-10 01:12:54.736357 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-09-10 01:12:54.736364 | orchestrator | Wednesday 10 September 2025 01:11:54 +0000 (0:00:00.068) 0:03:56.632 *** 2025-09-10 01:12:54.736372 | orchestrator | 2025-09-10 01:12:54.736380 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-09-10 01:12:54.736392 | orchestrator | Wednesday 10 September 2025 01:11:54 +0000 (0:00:00.067) 0:03:56.700 *** 2025-09-10 01:12:54.736400 | orchestrator | 2025-09-10 01:12:54.736408 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2025-09-10 01:12:54.736415 | orchestrator | Wednesday 10 September 2025 01:11:54 +0000 (0:00:00.064) 0:03:56.764 *** 2025-09-10 01:12:54.736423 | orchestrator | changed: [testbed-node-0] 2025-09-10 01:12:54.736431 | orchestrator | changed: [testbed-node-2] 2025-09-10 01:12:54.736438 | orchestrator | changed: [testbed-node-1] 2025-09-10 01:12:54.736446 | orchestrator | 2025-09-10 01:12:54.736454 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent 
container] ************* 2025-09-10 01:12:54.736461 | orchestrator | Wednesday 10 September 2025 01:12:11 +0000 (0:00:17.020) 0:04:13.784 *** 2025-09-10 01:12:54.736469 | orchestrator | changed: [testbed-node-2] 2025-09-10 01:12:54.736477 | orchestrator | changed: [testbed-node-0] 2025-09-10 01:12:54.736484 | orchestrator | changed: [testbed-node-1] 2025-09-10 01:12:54.736492 | orchestrator | 2025-09-10 01:12:54.736500 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2025-09-10 01:12:54.736507 | orchestrator | Wednesday 10 September 2025 01:12:22 +0000 (0:00:11.481) 0:04:25.266 *** 2025-09-10 01:12:54.736515 | orchestrator | changed: [testbed-node-0] 2025-09-10 01:12:54.736523 | orchestrator | changed: [testbed-node-1] 2025-09-10 01:12:54.736530 | orchestrator | changed: [testbed-node-2] 2025-09-10 01:12:54.736538 | orchestrator | 2025-09-10 01:12:54.736546 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2025-09-10 01:12:54.736553 | orchestrator | Wednesday 10 September 2025 01:12:33 +0000 (0:00:10.591) 0:04:35.858 *** 2025-09-10 01:12:54.736561 | orchestrator | changed: [testbed-node-0] 2025-09-10 01:12:54.736569 | orchestrator | changed: [testbed-node-1] 2025-09-10 01:12:54.736576 | orchestrator | changed: [testbed-node-2] 2025-09-10 01:12:54.736584 | orchestrator | 2025-09-10 01:12:54.736591 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2025-09-10 01:12:54.736599 | orchestrator | Wednesday 10 September 2025 01:12:43 +0000 (0:00:10.519) 0:04:46.377 *** 2025-09-10 01:12:54.736607 | orchestrator | changed: [testbed-node-1] 2025-09-10 01:12:54.736614 | orchestrator | changed: [testbed-node-2] 2025-09-10 01:12:54.736635 | orchestrator | changed: [testbed-node-0] 2025-09-10 01:12:54.736644 | orchestrator | 2025-09-10 01:12:54.736652 | orchestrator | PLAY RECAP 
********************************************************************* 2025-09-10 01:12:54.736660 | orchestrator | testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-10 01:12:54.736668 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-10 01:12:54.736676 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-10 01:12:54.736689 | orchestrator | 2025-09-10 01:12:54.736697 | orchestrator | 2025-09-10 01:12:54.736704 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-10 01:12:54.736712 | orchestrator | Wednesday 10 September 2025 01:12:52 +0000 (0:00:08.341) 0:04:54.719 *** 2025-09-10 01:12:54.736724 | orchestrator | =============================================================================== 2025-09-10 01:12:54.736732 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 21.40s 2025-09-10 01:12:54.736740 | orchestrator | octavia : Restart octavia-api container -------------------------------- 17.02s 2025-09-10 01:12:54.736747 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 16.41s 2025-09-10 01:12:54.736755 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.24s 2025-09-10 01:12:54.736763 | orchestrator | octavia : Add rules for security groups -------------------------------- 15.24s 2025-09-10 01:12:54.736770 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.48s 2025-09-10 01:12:54.736778 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.62s 2025-09-10 01:12:54.736786 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 10.59s 2025-09-10 01:12:54.736793 | orchestrator | octavia : Restart 
octavia-housekeeping container ----------------------- 10.52s 2025-09-10 01:12:54.736801 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 8.34s 2025-09-10 01:12:54.736809 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.34s 2025-09-10 01:12:54.736817 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 8.16s 2025-09-10 01:12:54.736824 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.83s 2025-09-10 01:12:54.736832 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.74s 2025-09-10 01:12:54.736840 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.47s 2025-09-10 01:12:54.736847 | orchestrator | octavia : Update loadbalancer management subnet ------------------------- 5.45s 2025-09-10 01:12:54.736855 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.39s 2025-09-10 01:12:54.736863 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.27s 2025-09-10 01:12:54.736871 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.21s 2025-09-10 01:12:54.736878 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.19s 2025-09-10 01:12:54.736886 | orchestrator | 2025-09-10 01:12:54 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-10 01:12:57.774333 | orchestrator | 2025-09-10 01:12:57 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-10 01:13:00.812094 | orchestrator | 2025-09-10 01:13:00 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-10 01:13:03.855817 | orchestrator | 2025-09-10 01:13:03 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-10 01:13:06.897367 | orchestrator | 2025-09-10 01:13:06 | INFO  | Wait 1 
second(s) until refresh of running tasks 2025-09-10 01:13:09.946152 | orchestrator | 2025-09-10 01:13:09 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-10 01:13:12.982360 | orchestrator | 2025-09-10 01:13:12 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-10 01:13:16.036918 | orchestrator | 2025-09-10 01:13:16 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-10 01:13:19.082470 | orchestrator | 2025-09-10 01:13:19 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-10 01:13:22.131976 | orchestrator | 2025-09-10 01:13:22 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-10 01:13:25.169795 | orchestrator | 2025-09-10 01:13:25 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-10 01:13:28.213013 | orchestrator | 2025-09-10 01:13:28 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-10 01:13:31.261769 | orchestrator | 2025-09-10 01:13:31 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-10 01:13:34.302347 | orchestrator | 2025-09-10 01:13:34 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-10 01:13:37.349036 | orchestrator | 2025-09-10 01:13:37 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-10 01:13:40.393600 | orchestrator | 2025-09-10 01:13:40 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-10 01:13:43.432470 | orchestrator | 2025-09-10 01:13:43 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-10 01:13:46.479845 | orchestrator | 2025-09-10 01:13:46 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-10 01:13:49.524801 | orchestrator | 2025-09-10 01:13:49 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-10 01:13:52.563925 | orchestrator | 2025-09-10 01:13:52 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-10 01:13:55.611482 | orchestrator | 2025-09-10 01:13:55.927019 | orchestrator | 2025-09-10 01:13:55.931064 | 
orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Wed Sep 10 01:13:55 UTC 2025 2025-09-10 01:13:55.931107 | orchestrator | 2025-09-10 01:13:56.343573 | orchestrator | ok: Runtime: 0:34:32.742109 2025-09-10 01:13:56.591807 | 2025-09-10 01:13:56.591958 | TASK [Bootstrap services] 2025-09-10 01:13:57.315730 | orchestrator | 2025-09-10 01:13:57.315922 | orchestrator | # BOOTSTRAP 2025-09-10 01:13:57.315946 | orchestrator | 2025-09-10 01:13:57.315960 | orchestrator | + set -e 2025-09-10 01:13:57.315974 | orchestrator | + echo 2025-09-10 01:13:57.315987 | orchestrator | + echo '# BOOTSTRAP' 2025-09-10 01:13:57.316006 | orchestrator | + echo 2025-09-10 01:13:57.316051 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-09-10 01:13:57.324636 | orchestrator | + set -e 2025-09-10 01:13:57.324719 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-09-10 01:14:01.811377 | orchestrator | 2025-09-10 01:14:01 | INFO  | It takes a moment until task 76fd02f0-aaec-4cef-a0d1-03e7ee1db847 (flavor-manager) has been started and output is visible here. 
2025-09-10 01:14:05.540239 | orchestrator | ╭───────────────────── Traceback (most recent call last) ──────────────────────╮ 2025-09-10 01:14:05.540335 | orchestrator | │ /usr/local/lib/python3.13/site-packages/openstack_flavor_manager/main.py:194 │ 2025-09-10 01:14:05.540360 | orchestrator | │ in run │ 2025-09-10 01:14:05.540371 | orchestrator | │ │ 2025-09-10 01:14:05.540382 | orchestrator | │ 191 │ logger.add(sys.stderr, format=log_fmt, level=level, colorize=True) │ 2025-09-10 01:14:05.540403 | orchestrator | │ 192 │ │ 2025-09-10 01:14:05.540414 | orchestrator | │ 193 │ definitions = get_flavor_definitions(name, url) │ 2025-09-10 01:14:05.540425 | orchestrator | │ ❱ 194 │ manager = FlavorManager( │ 2025-09-10 01:14:05.540434 | orchestrator | │ 195 │ │ cloud=Cloud(cloud), │ 2025-09-10 01:14:05.540444 | orchestrator | │ 196 │ │ definitions=definitions, │ 2025-09-10 01:14:05.540454 | orchestrator | │ 197 │ │ recommended=recommended, │ 2025-09-10 01:14:05.540463 | orchestrator | │ │ 2025-09-10 01:14:05.540474 | orchestrator | │ ╭───────────────────────────────── locals ─────────────────────────────────╮ │ 2025-09-10 01:14:05.540494 | orchestrator | │ │ cloud = 'admin' │ │ 2025-09-10 01:14:05.540504 | orchestrator | │ │ debug = False │ │ 2025-09-10 01:14:05.540514 | orchestrator | │ │ definitions = { │ │ 2025-09-10 01:14:05.540524 | orchestrator | │ │ │ 'reference': [ │ │ 2025-09-10 01:14:05.540533 | orchestrator | │ │ │ │ {'field': 'name', 'mandatory_prefix': 'SCS-'}, │ │ 2025-09-10 01:14:05.540543 | orchestrator | │ │ │ │ {'field': 'cpus'}, │ │ 2025-09-10 01:14:05.540553 | orchestrator | │ │ │ │ {'field': 'ram'}, │ │ 2025-09-10 01:14:05.540563 | orchestrator | │ │ │ │ {'field': 'disk'}, │ │ 2025-09-10 01:14:05.540572 | orchestrator | │ │ │ │ {'field': 'public', 'default': True}, │ │ 2025-09-10 01:14:05.540582 | orchestrator | │ │ │ │ {'field': 'disabled', 'default': False} │ │ 2025-09-10 01:14:05.540592 | orchestrator | │ │ │ ], │ │ 2025-09-10 01:14:05.540602 | 
orchestrator | │ │ │ 'mandatory': [ │ │ 2025-09-10 01:14:05.540611 | orchestrator | │ │ │ │ { │ │ 2025-09-10 01:14:05.540621 | orchestrator | │ │ │ │ │ 'name': 'SCS-1L-1', │ │ 2025-09-10 01:14:05.540650 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-10 01:14:05.540660 | orchestrator | │ │ │ │ │ 'ram': 1024, │ │ 2025-09-10 01:14:05.540670 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-10 01:14:05.540679 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'crowded-core', │ │ 2025-09-10 01:14:05.540689 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-10 01:14:05.540730 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1L:1', │ │ 2025-09-10 01:14:05.540740 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1L-1', │ │ 2025-09-10 01:14:05.540750 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-10 01:14:05.540759 | orchestrator | │ │ │ │ }, │ │ 2025-09-10 01:14:05.540769 | orchestrator | │ │ │ │ { │ │ 2025-09-10 01:14:05.540779 | orchestrator | │ │ │ │ │ 'name': 'SCS-1L-1-5', │ │ 2025-09-10 01:14:05.540788 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-10 01:14:05.540798 | orchestrator | │ │ │ │ │ 'ram': 1024, │ │ 2025-09-10 01:14:05.540807 | orchestrator | │ │ │ │ │ 'disk': 5, │ │ 2025-09-10 01:14:05.540817 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'crowded-core', │ │ 2025-09-10 01:14:05.540843 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-10 01:14:05.540853 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1L:5', │ │ 2025-09-10 01:14:05.540863 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1L-5', │ │ 2025-09-10 01:14:05.540873 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-10 01:14:05.540882 | orchestrator | │ │ │ │ }, │ │ 2025-09-10 01:14:05.540891 | orchestrator | │ │ │ │ { │ │ 2025-09-10 01:14:05.540901 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-2', │ │ 2025-09-10 01:14:05.540916 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-10 01:14:05.540926 | orchestrator | │ │ │ │ │ 'ram': 2048, │ │ 2025-09-10 
01:14:05.540935 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-10 01:14:05.540945 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-10 01:14:05.540955 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-10 01:14:05.540965 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:2', │ │ 2025-09-10 01:14:05.540974 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-2', │ │ 2025-09-10 01:14:05.540984 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-10 01:14:05.540993 | orchestrator | │ │ │ │ }, │ │ 2025-09-10 01:14:05.541003 | orchestrator | │ │ │ │ { │ │ 2025-09-10 01:14:05.541013 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-2-5', │ │ 2025-09-10 01:14:05.541022 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-10 01:14:05.541039 | orchestrator | │ │ │ │ │ 'ram': 2048, │ │ 2025-09-10 01:14:05.541049 | orchestrator | │ │ │ │ │ 'disk': 5, │ │ 2025-09-10 01:14:05.541059 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-10 01:14:05.541068 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-10 01:14:05.541078 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:2:5', │ │ 2025-09-10 01:14:05.541087 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-2-5', │ │ 2025-09-10 01:14:05.541097 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-10 01:14:05.541106 | orchestrator | │ │ │ │ }, │ │ 2025-09-10 01:14:05.541116 | orchestrator | │ │ │ │ { │ │ 2025-09-10 01:14:05.541125 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-4', │ │ 2025-09-10 01:14:05.541135 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-10 01:14:05.541144 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-09-10 01:14:05.541154 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-10 01:14:05.541163 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-10 01:14:05.541173 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-10 01:14:05.541182 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:4', │ │ 
2025-09-10 01:14:05.541192 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-4', │ │ 2025-09-10 01:14:05.541201 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-10 01:14:05.541210 | orchestrator | │ │ │ │ }, │ │ 2025-09-10 01:14:05.541220 | orchestrator | │ │ │ │ { │ │ 2025-09-10 01:14:05.541230 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-4-10', │ │ 2025-09-10 01:14:05.541239 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-10 01:14:05.541253 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-09-10 01:14:05.541263 | orchestrator | │ │ │ │ │ 'disk': 10, │ │ 2025-09-10 01:14:05.541279 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-10 01:14:05.571258 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-10 01:14:05.571282 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:4:10', │ │ 2025-09-10 01:14:05.571292 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-4-10', │ │ 2025-09-10 01:14:05.571302 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-10 01:14:05.571312 | orchestrator | │ │ │ │ }, │ │ 2025-09-10 01:14:05.571322 | orchestrator | │ │ │ │ { │ │ 2025-09-10 01:14:05.571332 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-8', │ │ 2025-09-10 01:14:05.571342 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-10 01:14:05.571394 | orchestrator | │ │ │ │ │ 'ram': 8192, │ │ 2025-09-10 01:14:05.571404 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-10 01:14:05.571414 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-10 01:14:05.571424 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-10 01:14:05.571433 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:8', │ │ 2025-09-10 01:14:05.571443 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-8', │ │ 2025-09-10 01:14:05.571453 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-10 01:14:05.571463 | orchestrator | │ │ │ │ }, │ │ 2025-09-10 01:14:05.571472 | orchestrator | │ │ │ │ { │ │ 2025-09-10 01:14:05.571482 | 
orchestrator | │ │ │ │ │ 'name': 'SCS-1V-8-20', │ │ 2025-09-10 01:14:05.571492 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-10 01:14:05.571501 | orchestrator | │ │ │ │ │ 'ram': 8192, │ │ 2025-09-10 01:14:05.571511 | orchestrator | │ │ │ │ │ 'disk': 20, │ │ 2025-09-10 01:14:05.571520 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-10 01:14:05.571530 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-10 01:14:05.571540 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:8:20', │ │ 2025-09-10 01:14:05.571550 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-8-20', │ │ 2025-09-10 01:14:05.571559 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-10 01:14:05.571569 | orchestrator | │ │ │ │ }, │ │ 2025-09-10 01:14:05.571578 | orchestrator | │ │ │ │ { │ │ 2025-09-10 01:14:05.571588 | orchestrator | │ │ │ │ │ 'name': 'SCS-2V-4', │ │ 2025-09-10 01:14:05.571598 | orchestrator | │ │ │ │ │ 'cpus': 2, │ │ 2025-09-10 01:14:05.571608 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-09-10 01:14:05.571618 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-10 01:14:05.571628 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-10 01:14:05.571637 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-10 01:14:05.571647 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-2V:4', │ │ 2025-09-10 01:14:05.571656 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-2V-4', │ │ 2025-09-10 01:14:05.571666 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-10 01:14:05.571683 | orchestrator | │ │ │ │ }, │ │ 2025-09-10 01:14:05.571714 | orchestrator | │ │ │ │ { │ │ 2025-09-10 01:14:05.571725 | orchestrator | │ │ │ │ │ 'name': 'SCS-2V-4-10', │ │ 2025-09-10 01:14:05.571734 | orchestrator | │ │ │ │ │ 'cpus': 2, │ │ 2025-09-10 01:14:05.571750 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-09-10 01:14:05.571760 | orchestrator | │ │ │ │ │ 'disk': 10, │ │ 2025-09-10 01:14:05.571777 | orchestrator | │ │ │ │ │ 
'scs:cpu-type': 'shared-core', │ │ 2025-09-10 01:14:05.571788 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-10 01:14:05.571797 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-2V:4:10', │ │ 2025-09-10 01:14:05.571807 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-2V-4-10', │ │ 2025-09-10 01:14:05.571817 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-10 01:14:05.571827 | orchestrator | │ │ │ │ }, │ │ 2025-09-10 01:14:05.571837 | orchestrator | │ │ │ │ ... +19 │ │ 2025-09-10 01:14:05.571847 | orchestrator | │ │ │ ] │ │ 2025-09-10 01:14:05.571856 | orchestrator | │ │ } │ │ 2025-09-10 01:14:05.571866 | orchestrator | │ │ level = 'INFO' │ │ 2025-09-10 01:14:05.571876 | orchestrator | │ │ limit_memory = 32 │ │ 2025-09-10 01:14:05.571886 | orchestrator | │ │ log_fmt = '{time:YYYY-MM-DD HH:mm:ss} | │ │ 2025-09-10 01:14:05.571896 | orchestrator | │ │ {level: <8} | '+17 │ │ 2025-09-10 01:14:05.571905 | orchestrator | │ │ name = 'local' │ │ 2025-09-10 01:14:05.571915 | orchestrator | │ │ recommended = True │ │ 2025-09-10 01:14:05.571925 | orchestrator | │ │ url = None │ │ 2025-09-10 01:14:05.571935 | orchestrator | │ ╰──────────────────────────────────────────────────────────────────────────╯ │ 2025-09-10 01:14:05.571947 | orchestrator | │ │ 2025-09-10 01:14:05.571957 | orchestrator | │ /usr/local/lib/python3.13/site-packages/openstack_flavor_manager/main.py:101 │ 2025-09-10 01:14:05.571966 | orchestrator | │ in __init__ │ 2025-09-10 01:14:05.571976 | orchestrator | │ │ 2025-09-10 01:14:05.571986 | orchestrator | │ 98 │ │ self.required_flavors = definitions["mandatory"] │ 2025-09-10 01:14:05.571995 | orchestrator | │ 99 │ │ self.cloud = cloud │ 2025-09-10 01:14:05.572005 | orchestrator | │ 100 │ │ if recommended: │ 2025-09-10 01:14:05.572015 | orchestrator | │ ❱ 101 │ │ │ recommended_flavors = definitions["recommended"] │ 2025-09-10 01:14:05.572025 | orchestrator | │ 102 │ │ │ # Filter recommended flavors based on memory limit │ 2025-09-10 
01:14:05.572034 | orchestrator | │ 103 │ │ │ limit_memory_mb = limit_memory * 1024 │ 2025-09-10 01:14:05.572044 | orchestrator | │ 104 │ │ │ filtered_recommended = [ │ 2025-09-10 01:14:05.572054 | orchestrator | │ │ 2025-09-10 01:14:05.572068 | orchestrator | │ ╭───────────────────────────────── locals ─────────────────────────────────╮ │ 2025-09-10 01:14:05.572084 | orchestrator | │ │ cloud = │ │ 2025-09-10 01:14:05.572104 | orchestrator | │ │ definitions = { │ │ 2025-09-10 01:14:05.572114 | orchestrator | │ │ │ 'reference': [ │ │ 2025-09-10 01:14:05.572124 | orchestrator | │ │ │ │ {'field': 'name', 'mandatory_prefix': 'SCS-'}, │ │ 2025-09-10 01:14:05.572133 | orchestrator | │ │ │ │ {'field': 'cpus'}, │ │ 2025-09-10 01:14:05.572143 | orchestrator | │ │ │ │ {'field': 'ram'}, │ │ 2025-09-10 01:14:05.572153 | orchestrator | │ │ │ │ {'field': 'disk'}, │ │ 2025-09-10 01:14:05.572162 | orchestrator | │ │ │ │ {'field': 'public', 'default': True}, │ │ 2025-09-10 01:14:05.572172 | orchestrator | │ │ │ │ {'field': 'disabled', 'default': False} │ │ 2025-09-10 01:14:05.572182 | orchestrator | │ │ │ ], │ │ 2025-09-10 01:14:05.572192 | orchestrator | │ │ │ 'mandatory': [ │ │ 2025-09-10 01:14:05.572206 | orchestrator | │ │ │ │ { │ │ 2025-09-10 01:14:05.593921 | orchestrator | │ │ │ │ │ 'name': 'SCS-1L-1', │ │ 2025-09-10 01:14:05.593967 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-10 01:14:05.593980 | orchestrator | │ │ │ │ │ 'ram': 1024, │ │ 2025-09-10 01:14:05.593992 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-10 01:14:05.594003 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'crowded-core', │ │ 2025-09-10 01:14:05.594043 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-10 01:14:05.594055 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1L:1', │ │ 2025-09-10 01:14:05.594064 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1L-1', │ │ 2025-09-10 01:14:05.594076 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-10 01:14:05.594085 | orchestrator | 
│ │ │ │ }, │ │ 2025-09-10 01:14:05.594095 | orchestrator | │ │ │ │ { │ │ 2025-09-10 01:14:05.594105 | orchestrator | │ │ │ │ │ 'name': 'SCS-1L-1-5', │ │ 2025-09-10 01:14:05.594114 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-10 01:14:05.594123 | orchestrator | │ │ │ │ │ 'ram': 1024, │ │ 2025-09-10 01:14:05.594133 | orchestrator | │ │ │ │ │ 'disk': 5, │ │ 2025-09-10 01:14:05.594143 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'crowded-core', │ │ 2025-09-10 01:14:05.594152 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-10 01:14:05.594161 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1L:5', │ │ 2025-09-10 01:14:05.594171 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1L-5', │ │ 2025-09-10 01:14:05.594181 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-10 01:14:05.594201 | orchestrator | │ │ │ │ }, │ │ 2025-09-10 01:14:05.594211 | orchestrator | │ │ │ │ { │ │ 2025-09-10 01:14:05.594221 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-2', │ │ 2025-09-10 01:14:05.594231 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-10 01:14:05.594241 | orchestrator | │ │ │ │ │ 'ram': 2048, │ │ 2025-09-10 01:14:05.594250 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-10 01:14:05.594260 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-10 01:14:05.594270 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-10 01:14:05.594279 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:2', │ │ 2025-09-10 01:14:05.594289 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-2', │ │ 2025-09-10 01:14:05.594298 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-10 01:14:05.594308 | orchestrator | │ │ │ │ }, │ │ 2025-09-10 01:14:05.594325 | orchestrator | │ │ │ │ { │ │ 2025-09-10 01:14:05.594335 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-2-5', │ │ 2025-09-10 01:14:05.594345 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-10 01:14:05.594355 | orchestrator | │ │ │ │ │ 'ram': 2048, │ │ 2025-09-10 01:14:05.594364 | 
orchestrator | │ │ │ │ │ 'disk': 5, │ │ 2025-09-10 01:14:05.594374 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-10 01:14:05.594383 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-10 01:14:05.594393 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:2:5', │ │ 2025-09-10 01:14:05.594403 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-2-5', │ │ 2025-09-10 01:14:05.594412 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-10 01:14:05.594422 | orchestrator | │ │ │ │ }, │ │ 2025-09-10 01:14:05.594442 | orchestrator | │ │ │ │ { │ │ 2025-09-10 01:14:05.594452 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-4', │ │ 2025-09-10 01:14:05.594462 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-10 01:14:05.594471 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-09-10 01:14:05.594481 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-10 01:14:05.594491 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-10 01:14:05.594500 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-10 01:14:05.594510 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:4', │ │ 2025-09-10 01:14:05.594519 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-4', │ │ 2025-09-10 01:14:05.594529 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-10 01:14:05.594544 | orchestrator | │ │ │ │ }, │ │ 2025-09-10 01:14:05.594554 | orchestrator | │ │ │ │ { │ │ 2025-09-10 01:14:05.594564 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-4-10', │ │ 2025-09-10 01:14:05.594573 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-10 01:14:05.594583 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-09-10 01:14:05.594593 | orchestrator | │ │ │ │ │ 'disk': 10, │ │ 2025-09-10 01:14:05.594602 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-10 01:14:05.594612 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-10 01:14:05.594621 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:4:10', │ │ 2025-09-10 
01:14:05.594631 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-4-10', │ │
2025-09-10 01:14:05.594641 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │
2025-09-10 01:14:05.594650 | orchestrator | │ │ │ │ }, │ │
2025-09-10 01:14:05.594660 | orchestrator | │ │ │ │ { │ │
2025-09-10 01:14:05.594669 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-8', │ │
2025-09-10 01:14:05.594679 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │
2025-09-10 01:14:05.594689 | orchestrator | │ │ │ │ │ 'ram': 8192, │ │
2025-09-10 01:14:05.594723 | orchestrator | │ │ │ │ │ 'disk': 0, │ │
2025-09-10 01:14:05.594733 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │
2025-09-10 01:14:05.594745 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │
2025-09-10 01:14:05.594755 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:8', │ │
2025-09-10 01:14:05.594764 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-8', │ │
2025-09-10 01:14:05.594774 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │
2025-09-10 01:14:05.594784 | orchestrator | │ │ │ │ }, │ │
2025-09-10 01:14:05.594794 | orchestrator | │ │ │ │ { │ │
2025-09-10 01:14:05.594804 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-8-20', │ │
2025-09-10 01:14:05.594814 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │
2025-09-10 01:14:05.594823 | orchestrator | │ │ │ │ │ 'ram': 8192, │ │
2025-09-10 01:14:05.594833 | orchestrator | │ │ │ │ │ 'disk': 20, │ │
2025-09-10 01:14:05.594842 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │
2025-09-10 01:14:05.594852 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │
2025-09-10 01:14:05.594862 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:8:20', │ │
2025-09-10 01:14:05.594871 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-8-20', │ │
2025-09-10 01:14:05.594881 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │
2025-09-10 01:14:05.594901 | orchestrator | │ │ │ │ }, │ │
2025-09-10 01:14:05.674065 | orchestrator | │ │ │ │ { │ │
2025-09-10 01:14:05.674125 | orchestrator | │ │ │ │ │ 'name': 'SCS-2V-4', │ │
2025-09-10 01:14:05.674155 | orchestrator | │ │ │ │ │ 'cpus': 2, │ │
2025-09-10 01:14:05.674165 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │
2025-09-10 01:14:05.674175 | orchestrator | │ │ │ │ │ 'disk': 0, │ │
2025-09-10 01:14:05.674185 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │
2025-09-10 01:14:05.674195 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │
2025-09-10 01:14:05.674204 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-2V:4', │ │
2025-09-10 01:14:05.674214 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-2V-4', │ │
2025-09-10 01:14:05.674223 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │
2025-09-10 01:14:05.674233 | orchestrator | │ │ │ │ }, │ │
2025-09-10 01:14:05.674243 | orchestrator | │ │ │ │ { │ │
2025-09-10 01:14:05.674252 | orchestrator | │ │ │ │ │ 'name': 'SCS-2V-4-10', │ │
2025-09-10 01:14:05.674261 | orchestrator | │ │ │ │ │ 'cpus': 2, │ │
2025-09-10 01:14:05.674271 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │
2025-09-10 01:14:05.674281 | orchestrator | │ │ │ │ │ 'disk': 10, │ │
2025-09-10 01:14:05.674290 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │
2025-09-10 01:14:05.674299 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │
2025-09-10 01:14:05.674309 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-2V:4:10', │ │
2025-09-10 01:14:05.674319 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-2V-4-10', │ │
2025-09-10 01:14:05.674328 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │
2025-09-10 01:14:05.674338 | orchestrator | │ │ │ │ }, │ │
2025-09-10 01:14:05.674348 | orchestrator | │ │ │ │ ... +19 │ │
2025-09-10 01:14:05.674357 | orchestrator | │ │ │ ] │ │
2025-09-10 01:14:05.674367 | orchestrator | │ │ } │ │
2025-09-10 01:14:05.674376 | orchestrator | │ │ limit_memory = 32 │ │
2025-09-10 01:14:05.674386 | orchestrator | │ │ recommended = True │ │
2025-09-10 01:14:05.674395 | orchestrator | │ │ self = │ │
2025-09-10 01:14:05.674415 | orchestrator | │ ╰──────────────────────────────────────────────────────────────────────────╯ │
2025-09-10 01:14:05.674427 | orchestrator | ╰──────────────────────────────────────────────────────────────────────────────╯
2025-09-10 01:14:05.674450 | orchestrator | KeyError: 'recommended'
2025-09-10 01:14:06.135012 | orchestrator | ERROR
2025-09-10 01:14:06.135440 | orchestrator | {
2025-09-10 01:14:06.135568 | orchestrator | "delta": "0:00:09.068138",
2025-09-10 01:14:06.135637 | orchestrator | "end": "2025-09-10 01:14:06.003904",
2025-09-10 01:14:06.135705 | orchestrator | "msg": "non-zero return code",
2025-09-10 01:14:06.135833 | orchestrator | "rc": 1,
2025-09-10 01:14:06.135920 | orchestrator | "start": "2025-09-10 01:13:56.935766"
2025-09-10 01:14:06.136021 | orchestrator | } failure
2025-09-10 01:14:06.156712 |
2025-09-10 01:14:06.156896 | PLAY RECAP
2025-09-10 01:14:06.156979 | orchestrator | ok: 22 changed: 9 unreachable: 0 failed: 1 skipped: 3 rescued: 0 ignored: 0
2025-09-10 01:14:06.157018 |
2025-09-10 01:14:06.361028 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-09-10 01:14:06.362117 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-09-10 01:14:07.097241 |
2025-09-10 01:14:07.097402 | PLAY [Post output play]
2025-09-10 01:14:07.113075 |
2025-09-10 01:14:07.113200 | LOOP [stage-output : Register sources]
2025-09-10 01:14:07.165042 |
2025-09-10 01:14:07.165243 | TASK [stage-output : Check sudo]
2025-09-10 01:14:07.991196 | orchestrator | sudo: a password is required
2025-09-10 01:14:08.199418 | orchestrator | ok: Runtime: 0:00:00.013047
2025-09-10 01:14:08.210695 |
2025-09-10 01:14:08.210898 | LOOP [stage-output : Set source and destination for files and folders]
2025-09-10 01:14:08.251796 |
2025-09-10 01:14:08.252095 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-09-10 01:14:08.320617 | orchestrator | ok
2025-09-10 01:14:08.329221 |
2025-09-10 01:14:08.329342 | LOOP [stage-output : Ensure target folders exist]
2025-09-10 01:14:08.740401 | orchestrator | ok: "docs"
2025-09-10 01:14:08.740801 |
2025-09-10 01:14:08.961615 | orchestrator | ok: "artifacts"
2025-09-10 01:14:09.166217 | orchestrator | ok: "logs"
2025-09-10 01:14:09.190163 |
2025-09-10 01:14:09.190375 | LOOP [stage-output : Copy files and folders to staging folder]
2025-09-10 01:14:09.227510 |
2025-09-10 01:14:09.227860 | TASK [stage-output : Make all log files readable]
2025-09-10 01:14:09.484324 | orchestrator | ok
2025-09-10 01:14:09.493530 |
2025-09-10 01:14:09.493676 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-09-10 01:14:09.539467 | orchestrator | skipping: Conditional result was False
2025-09-10 01:14:09.556483 |
2025-09-10 01:14:09.556658 | TASK [stage-output : Discover log files for compression]
2025-09-10 01:14:09.581921 | orchestrator | skipping: Conditional result was False
2025-09-10 01:14:09.597168 |
2025-09-10 01:14:09.597352 | LOOP [stage-output : Archive everything from logs]
2025-09-10 01:14:09.641643 |
2025-09-10 01:14:09.641841 | PLAY [Post cleanup play]
2025-09-10 01:14:09.650102 |
2025-09-10 01:14:09.650207 | TASK [Set cloud fact (Zuul deployment)]
2025-09-10 01:14:09.708520 | orchestrator | ok
2025-09-10 01:14:09.721197 |
2025-09-10 01:14:09.721319 | TASK [Set cloud fact (local deployment)]
2025-09-10 01:14:09.755661 | orchestrator | skipping: Conditional result was False
2025-09-10 01:14:09.771547 |
2025-09-10 01:14:09.771698 | TASK [Clean the cloud environment]
2025-09-10 01:14:10.302171 | orchestrator | 2025-09-10 01:14:10 - clean up servers
2025-09-10 01:14:11.078364 | orchestrator | 2025-09-10 01:14:11 - testbed-manager
2025-09-10 01:14:11.165658 | orchestrator | 2025-09-10 01:14:11 - testbed-node-1
2025-09-10 01:14:11.257265 | orchestrator | 2025-09-10 01:14:11 - testbed-node-4
2025-09-10 01:14:11.357824 | orchestrator | 2025-09-10 01:14:11 - testbed-node-0
2025-09-10 01:14:11.449290 | orchestrator | 2025-09-10 01:14:11 - testbed-node-5
2025-09-10 01:14:11.566362 | orchestrator | 2025-09-10 01:14:11 - testbed-node-3
2025-09-10 01:14:11.666767 | orchestrator | 2025-09-10 01:14:11 - testbed-node-2
2025-09-10 01:14:11.766763 | orchestrator | 2025-09-10 01:14:11 - clean up keypairs
2025-09-10 01:14:11.788821 | orchestrator | 2025-09-10 01:14:11 - testbed
2025-09-10 01:14:11.816176 | orchestrator | 2025-09-10 01:14:11 - wait for servers to be gone
2025-09-10 01:14:22.774242 | orchestrator | 2025-09-10 01:14:22 - clean up ports
2025-09-10 01:14:22.954950 | orchestrator | 2025-09-10 01:14:22 - 2fbcb111-65a9-4282-bdd6-de554099db77
2025-09-10 01:14:23.221393 | orchestrator | 2025-09-10 01:14:23 - 3902540c-fcc3-40e3-b8b3-1541ae0b8d15
2025-09-10 01:14:23.723526 | orchestrator | 2025-09-10 01:14:23 - 3b4eb3b4-8de9-4ea8-8c9c-36525fa02cb9
2025-09-10 01:14:23.982141 | orchestrator | 2025-09-10 01:14:23 - 663845bb-a1ed-4fc3-afb1-7ed7d6c8216c
2025-09-10 01:14:24.215783 | orchestrator | 2025-09-10 01:14:24 - 8f563b57-7b67-4326-9286-17a444c34370
2025-09-10 01:14:24.422970 | orchestrator | 2025-09-10 01:14:24 - c0953458-b6a7-4e67-ac17-2eb388eae315
2025-09-10 01:14:24.638326 | orchestrator | 2025-09-10 01:14:24 - e5adc2b3-177f-4112-b06c-c28375e28b84
2025-09-10 01:14:24.838955 | orchestrator | 2025-09-10 01:14:24 - clean up volumes
2025-09-10 01:14:24.951559 | orchestrator | 2025-09-10 01:14:24 - testbed-volume-manager-base
2025-09-10 01:14:24.991506 | orchestrator | 2025-09-10 01:14:24 - testbed-volume-1-node-base
2025-09-10 01:14:25.032019 | orchestrator | 2025-09-10 01:14:25 - testbed-volume-3-node-base
2025-09-10 01:14:25.076595 | orchestrator | 2025-09-10 01:14:25 - testbed-volume-0-node-base
2025-09-10 01:14:25.115818 | orchestrator | 2025-09-10 01:14:25 - testbed-volume-2-node-base
2025-09-10 01:14:25.155613 | orchestrator | 2025-09-10 01:14:25 - testbed-volume-4-node-base
2025-09-10 01:14:25.201867 | orchestrator | 2025-09-10 01:14:25 - testbed-volume-5-node-base
2025-09-10 01:14:25.241774 | orchestrator | 2025-09-10 01:14:25 - testbed-volume-2-node-5
2025-09-10 01:14:25.279315 | orchestrator | 2025-09-10 01:14:25 - testbed-volume-7-node-4
2025-09-10 01:14:25.321472 | orchestrator | 2025-09-10 01:14:25 - testbed-volume-1-node-4
2025-09-10 01:14:25.360427 | orchestrator | 2025-09-10 01:14:25 - testbed-volume-0-node-3
2025-09-10 01:14:25.401757 | orchestrator | 2025-09-10 01:14:25 - testbed-volume-6-node-3
2025-09-10 01:14:25.448730 | orchestrator | 2025-09-10 01:14:25 - testbed-volume-5-node-5
2025-09-10 01:14:25.491752 | orchestrator | 2025-09-10 01:14:25 - testbed-volume-3-node-3
2025-09-10 01:14:25.531661 | orchestrator | 2025-09-10 01:14:25 - testbed-volume-4-node-4
2025-09-10 01:14:25.571763 | orchestrator | 2025-09-10 01:14:25 - testbed-volume-8-node-5
2025-09-10 01:14:25.609659 | orchestrator | 2025-09-10 01:14:25 - disconnect routers
2025-09-10 01:14:25.739505 | orchestrator | 2025-09-10 01:14:25 - testbed
2025-09-10 01:14:26.764113 | orchestrator | 2025-09-10 01:14:26 - clean up subnets
2025-09-10 01:14:26.807384 | orchestrator | 2025-09-10 01:14:26 - subnet-testbed-management
2025-09-10 01:14:27.020791 | orchestrator | 2025-09-10 01:14:27 - clean up networks
2025-09-10 01:14:27.157754 | orchestrator | 2025-09-10 01:14:27 - net-testbed-management
2025-09-10 01:14:27.545193 | orchestrator | 2025-09-10 01:14:27 - clean up security groups
2025-09-10 01:14:27.584016 | orchestrator | 2025-09-10 01:14:27 - testbed-node
2025-09-10 01:14:27.697975 | orchestrator | 2025-09-10 01:14:27 - testbed-management
2025-09-10 01:14:27.820358 | orchestrator | 2025-09-10 01:14:27 - clean up floating ips
2025-09-10 01:14:27.854705 | orchestrator | 2025-09-10 01:14:27 - 81.163.192.31
2025-09-10 01:14:28.717514 | orchestrator | 2025-09-10 01:14:28 - clean up routers
2025-09-10 01:14:28.823588 | orchestrator | 2025-09-10 01:14:28 - testbed
2025-09-10 01:14:29.831661 | orchestrator | ok: Runtime: 0:00:19.657196
2025-09-10 01:14:29.839106 |
2025-09-10 01:14:29.839370 | PLAY RECAP
2025-09-10 01:14:29.839584 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-09-10 01:14:29.839695 |
2025-09-10 01:14:29.993451 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-09-10 01:14:29.994523 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-09-10 01:14:30.733669 |
2025-09-10 01:14:30.733865 | PLAY [Cleanup play]
2025-09-10 01:14:30.750404 |
2025-09-10 01:14:30.750554 | TASK [Set cloud fact (Zuul deployment)]
2025-09-10 01:14:30.806645 | orchestrator | ok
2025-09-10 01:14:30.815964 |
2025-09-10 01:14:30.816129 | TASK [Set cloud fact (local deployment)]
2025-09-10 01:14:30.850529 | orchestrator | skipping: Conditional result was False
2025-09-10 01:14:30.861534 |
2025-09-10 01:14:30.861663 | TASK [Clean the cloud environment]
2025-09-10 01:14:31.989554 | orchestrator | 2025-09-10 01:14:31 - clean up servers
2025-09-10 01:14:32.490975 | orchestrator | 2025-09-10 01:14:32 - clean up keypairs
2025-09-10 01:14:32.504458 | orchestrator | 2025-09-10 01:14:32 - wait for servers to be gone
2025-09-10 01:14:32.544374 | orchestrator | 2025-09-10 01:14:32 - clean up ports
2025-09-10 01:14:32.613952 | orchestrator | 2025-09-10 01:14:32 - clean up volumes
2025-09-10 01:14:32.675851 | orchestrator | 2025-09-10 01:14:32 - disconnect routers
2025-09-10 01:14:32.697348 | orchestrator | 2025-09-10 01:14:32 - clean up subnets
2025-09-10 01:14:32.720071 | orchestrator | 2025-09-10 01:14:32 - clean up networks
2025-09-10 01:14:32.893574 | orchestrator | 2025-09-10 01:14:32 - clean up security groups
2025-09-10 01:14:32.925584 | orchestrator | 2025-09-10 01:14:32 - clean up floating ips
2025-09-10 01:14:32.953024 | orchestrator | 2025-09-10 01:14:32 - clean up routers
2025-09-10 01:14:33.398692 | orchestrator | ok: Runtime: 0:00:01.349067
2025-09-10 01:14:33.402600 |
2025-09-10 01:14:33.402752 | PLAY RECAP
2025-09-10 01:14:33.402945 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-09-10 01:14:33.403013 |
2025-09-10 01:14:33.531406 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-09-10 01:14:33.533840 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-09-10 01:14:34.219474 |
2025-09-10 01:14:34.219598 | PLAY [Base post-fetch]
2025-09-10 01:14:34.233518 |
2025-09-10 01:14:34.233624 | TASK [fetch-output : Set log path for multiple nodes]
2025-09-10 01:14:34.277295 | orchestrator | skipping: Conditional result was False
2025-09-10 01:14:34.286815 |
2025-09-10 01:14:34.286978 | TASK [fetch-output : Set log path for single node]
2025-09-10 01:14:34.317961 | orchestrator | ok
2025-09-10 01:14:34.326556 |
2025-09-10 01:14:34.326717 | LOOP [fetch-output : Ensure local output dirs]
2025-09-10 01:14:34.729608 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/dcf21b5b42194a42935d9fb9db71fe30/work/logs"
2025-09-10 01:14:34.974326 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/dcf21b5b42194a42935d9fb9db71fe30/work/artifacts"
2025-09-10 01:14:35.229595 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/dcf21b5b42194a42935d9fb9db71fe30/work/docs"
2025-09-10 01:14:35.243607 |
2025-09-10 01:14:35.243696 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-09-10 01:14:36.126298 | orchestrator | changed: .d..t...... ./
2025-09-10 01:14:36.126638 | orchestrator | changed: All items complete
2025-09-10 01:14:36.126696 |
2025-09-10 01:14:36.828286 | orchestrator | changed: .d..t...... ./
2025-09-10 01:14:37.530626 | orchestrator | changed: .d..t...... ./
2025-09-10 01:14:37.558312 |
2025-09-10 01:14:37.558535 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-09-10 01:14:37.593694 | orchestrator | skipping: Conditional result was False
2025-09-10 01:14:37.598569 | orchestrator | skipping: Conditional result was False
2025-09-10 01:14:37.621264 |
2025-09-10 01:14:37.621389 | PLAY RECAP
2025-09-10 01:14:37.621472 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-09-10 01:14:37.621515 |
2025-09-10 01:14:37.746801 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-09-10 01:14:37.749249 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-09-10 01:14:38.510661 |
2025-09-10 01:14:38.510877 | PLAY [Base post]
2025-09-10 01:14:38.525333 |
2025-09-10 01:14:38.525458 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-09-10 01:14:39.792868 | orchestrator | changed
2025-09-10 01:14:39.804873 |
2025-09-10 01:14:39.804995 | PLAY RECAP
2025-09-10 01:14:39.805075 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-09-10 01:14:39.805157 |
2025-09-10 01:14:39.925843 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-09-10 01:14:39.930243 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-09-10 01:14:40.736654 |
2025-09-10 01:14:40.736842 | PLAY [Base post-logs]
2025-09-10 01:14:40.747385 |
2025-09-10 01:14:40.747577 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-09-10 01:14:41.202122 | localhost | changed
2025-09-10 01:14:41.221107 |
2025-09-10 01:14:41.221290 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-09-10 01:14:41.258373 | localhost | ok
2025-09-10 01:14:41.263687 |
2025-09-10 01:14:41.263887 | TASK [Set zuul-log-path fact]
2025-09-10 01:14:41.280562 | localhost | ok
2025-09-10 01:14:41.292683 |
2025-09-10 01:14:41.292826 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-09-10 01:14:41.318346 | localhost | ok
2025-09-10 01:14:41.321283 |
2025-09-10 01:14:41.321386 | TASK [upload-logs : Create log directories]
2025-09-10 01:14:41.814094 | localhost | changed
2025-09-10 01:14:41.817205 |
2025-09-10 01:14:41.817319 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-09-10 01:14:42.318346 | localhost -> localhost | ok: Runtime: 0:00:00.006725
2025-09-10 01:14:42.324930 |
2025-09-10 01:14:42.325078 | TASK [upload-logs : Upload logs to log server]
2025-09-10 01:14:42.877600 | localhost | Output suppressed because no_log was given
2025-09-10 01:14:42.879643 |
2025-09-10 01:14:42.879747 | LOOP [upload-logs : Compress console log and json output]
2025-09-10 01:14:42.931717 | localhost | skipping: Conditional result was False
2025-09-10 01:14:42.936572 | localhost | skipping: Conditional result was False
2025-09-10 01:14:42.948019 |
2025-09-10 01:14:42.948194 | LOOP [upload-logs : Upload compressed console log and json output]
2025-09-10 01:14:42.992514 | localhost | skipping: Conditional result was False
2025-09-10 01:14:42.993242 |
2025-09-10 01:14:42.996347 | localhost | skipping: Conditional result was False
2025-09-10 01:14:43.010136 |
2025-09-10 01:14:43.010373 | LOOP [upload-logs : Upload console log and json output]